Scarlett Johansson raised concerns about the “immediate threat of AI” following the circulation of a deepfake video featuring her and other well-known Jewish celebrities, made in response to recent anti-Semitic comments by Kanye West.
The deepfake video showcased AI-generated versions of numerous celebrities, such as Johansson, David Schwimmer, Jerry Seinfeld, Drake, Adam Sandler, Steven Spielberg, and Mila Kunis.
It began with a deepfake representation of Johansson wearing a t-shirt depicting a raised middle finger adorned with a Star of David above Kanye’s name. The video was set to the tune of “Hava Nagila,” a traditional Jewish song typically played at celebratory cultural events, and concluded with a message urging viewers to join the fight against anti-Semitism.
Other celebrities depicted in the video included Sacha Baron Cohen, Jack Black, Natalie Portman, Adam Levine, Ben Stiller, and Lenny Kravitz.
Johansson expressed her distress over the dissemination of AI-generated videos featuring her likeness in response to anti-Semitic sentiments. In a statement to People, she said: “As a Jewish woman, I unequivocally denounce all forms of anti-Semitism and hate speech. But I also firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one person who takes accountability for it. We must call out the misuse of AI, no matter its messaging, or we risk losing a hold on reality.”
West had made derogatory remarks on social media, self-identifying as a “Nazi” and lauding Hitler, before deactivating his account. He also ran an advertisement during the Super Bowl directing viewers to his website, which Shopify subsequently shut down for policy violations. Fox TV station CEO Jack Abernethy also criticized the ads in a memo to staff.
Johansson has been an outspoken advocate against the unauthorized use of AI. She previously threatened legal action against OpenAI for using a voice resembling hers in its ChatGPT product; following significant backlash, OpenAI withdrew the prominently featured voice option.
Johansson emphasized, “While I have been a prominent target of AI misuse, the reality is that the threat of AI affects us all.”
She further stated: “There is a pressing need for progressive nations to enact regulations safeguarding citizens from the imminent perils posed by AI. It is alarming that the US government remains inert in addressing them.”
The actor urged lawmakers to enact legislation combating AI abuse, highlighting it as “a bipartisan issue with profound implications for humanity’s immediate future.”
The AI-generated video was created by Ori Bejerano, as indicated in his Instagram bio. His original post noted that the content was digitally altered or generated with AI to create a realistic appearance.
WhatsApp recently won a legal battle against NSO Group, an Israeli cyberweapons manufacturer. Despite this victory, a new threat has emerged from another Israeli company, Paragon Solutions, which also has operations in the United States.
In January, WhatsApp revealed that 90 users, including journalists and civil society members, were targeted last year by spyware created by Paragon Solutions, raising concerns about how Paragon’s government clients use its hacking tools.
Among the targeted individuals were Italian journalist Francesco Cancellato, immigrant support NGO founder Luca Casarini, and Libyan activist Husam El Gomati. University of Toronto researchers who work closely with WhatsApp plan to release a technical report on the breach.
Paragon, like NSO Group, provides spyware to government agencies. The spyware, known as Graphite, allows for hacking without the user’s knowledge, granting access to photos and encrypted messages. Paragon claims its use aligns with US policies for national security missions.
Paragon stated it has a zero-tolerance policy for violations and terminated its contract with Italy after the terms were breached. David Kaye, a former UN special rapporteur, described the marketing of such surveillance products as an abuse and a threat to the rule of law.
The issue has relevance in the US, where the Biden administration blacklisted NSO in 2021 due to reports of abuse. A contract between ICE and Paragon was suspended after concerns were raised about spyware use.
Paragon assures compliance with US laws and regulations, following the Biden executive order. The company, now US-owned, has a subsidiary in Virginia. Concerns remain about potential misuse against political opponents.
Experts from Citizen Lab and Amnesty Tech remain vigilant in detecting illegal surveillance in democracies worldwide.
DeepSeek has been banned from all Australian federal government devices in a crackdown on the Chinese AI chatbot, which officials say poses a national security risk.
Last week, DeepSeek’s generative AI chatbot was released, causing concern in US high-tech circles regarding censorship and data security.
The Department of Home Affairs issued an order on Tuesday prohibiting the use of the program on all federal government systems and devices, based on intelligence agency advice.
The home affairs minister, Tony Burke, emphasized that the decision was based on protecting the government and its assets, not on China being the country of origin.
Burke stated, “The Albanese government is taking swift and decisive action to safeguard Australia’s national security and interests.”
He added, “AI presents potential and opportunities, but the government will not hesitate to act if national security risks are identified.”
Government agencies and organizations have been advised to promptly identify and remove the app from their devices and to prevent its reinstallation.
This decision comes nearly two years after the Albanese government banned the Chinese social media app TikTok across government devices citing security and privacy risks.
In January, the industry and science minister, Ed Husic, anticipated a similar debate surrounding DeepSeek.
He stated, “I believe there will be parallels drawn naturally. There is a resemblance to the discussions seen around TikTok with regard to DeepSeek.”
Australia joins Taiwan, Italy, and some US states in blocking and banning apps on government devices.
The New South Wales state government banned the application this week, and other state governments are considering similar actions.
An analysis by Guardian Australia in January revealed that chatbots like DeepSeek avoid discussing political events sensitive to the Chinese government.
In contrast to other models, DeepSeek did not engage in conversations about topics such as Tiananmen Square and The Umbrella Revolution when asked.
Immediately after its release in January, DeepSeek topped global app store charts, triggering a significant drop in a major US tech index.
A groundbreaking report by AI experts suggests that the risk of artificial intelligence systems being used for malicious purposes is on the rise. Researchers are concerned that competition spurred by DeepSeek and similar organizations may cause safety risks to escalate.
Yoshua Bengio, a prominent figure in the AI field, views the progress of China’s DeepSeek startup with apprehension as it challenges the dominance of the United States in the industry.
“This leads to a tighter competition, which is concerning from a safety standpoint,” voiced Bengio.
He cautioned that American companies and their competitors, focused on overtaking DeepSeek to maintain their lead, may deprioritize safety. OpenAI, known for ChatGPT, responded by hastening the release of a new virtual assistant to keep up with DeepSeek’s advancements.
In a wide-ranging discussion on AI safety, Bengio stressed the importance of understanding the implications of the latest safety report on AI. The report, spearheaded by a group of 96 experts and endorsed by renowned figures like Geoffrey Hinton, sheds light on the potential misuse of general-purpose AI systems for malicious intents.
One of the highlighted risks is the development of AI models capable of providing guidance on producing hazardous substances that goes beyond the expertise of human experts. While these advancements have potential benefits in medicine, there is also concern about their misuse.
Although AI systems have become more adept at identifying software vulnerabilities independently, the report emphasizes the need for caution in the face of escalating cyber threats orchestrated by hackers.
Additionally, the report discusses the risks associated with AI technologies such as deepfakes, which can be exploited for fraudulent activities, including financial scams, misinformation, and the creation of explicit content.
Furthermore, the report flags the vulnerability of closed-source AI models to security breaches, highlighting the potential for malicious use if not regulated effectively.
In light of recent advancements like OpenAI’s o3 model, Bengio underscores the need for a thorough risk assessment to comprehend the evolving landscape of AI capabilities and associated risks.
While AI innovations hold promise for transforming various industries, there is a looming concern about their potential misuse, particularly by malicious actors seeking to exploit autonomous AI for nefarious purposes.
It is essential to address these risks proactively to mitigate the threats posed by AI developments and ensure that the technology is harnessed for beneficial purposes.
As society navigates the uncertainties surrounding AI advancements, there is a collective responsibility to shape the future trajectory of this transformative technology.
The UK technology secretary has raised concerns about TikTok’s use of the data of millions of Britons, even while acknowledging the platform’s ability to provide “uplifting” content, and said these concerns, along with the state of UK-China relations, are shaping the government’s acceptance of the video app.
After a US court upheld legislation that could potentially result in TikTok being banned or sold in the US, Peter Kyle expressed worries about the platform’s handling of data. “I am genuinely concerned about their use of data in relation to ownership models,” he told the Guardian.
However, following President Donald Trump’s executive order temporarily suspending the US ban for 75 days, Kyle referred to TikTok as a “desirable product” that enables young people to embrace different cultures and ideologies freely. He emphasized the importance of exploring new things and finding the right balance between the euphoria TikTok offers and potential concerns about Chinese propaganda.
A recent study from Rutgers University indicated that heavy users of TikTok in the US demonstrated an increase in pro-China attitudes of around 50%. There are fears that the Chinese government could access the data collected by the app. The study also claimed that TikTok’s moderation algorithms remove content about alleged abuses by the Chinese Communist Party and suppress anti-China material.
The study concluded that TikTok’s content aligns with the Chinese Communist Party’s goal of shaping favorable perceptions among young viewers, potentially influencing users through psychological manipulation. It described TikTok as a “flawed experiment.”
In response to these findings, Kyle urged caution when using TikTok, highlighting the presence of bias in editorial decisions made by various platforms and broadcasters. He emphasized the government’s commitment to monitoring social media trends and taking action if necessary to safeguard national security.
When asked about concerns regarding TikTok as a propaganda tool, Kyle stated that any actions taken by the government would be made public. He also mentioned being mindful of China’s relationships with other countries, clarifying that his comments were not specifically directed at China.
Regarding the ban on TikTok in the US, Kyle noted the potential risks associated with using the Chinese version of the app, which could involve data collection and the dissemination of propaganda. He expressed concerns about the implications of such actions.
A representative from TikTok emphasized that the UK app is operated by a UK-registered and regulated company, investing £10bn to ensure user data protection in the UK and Europe through independent monitoring and verification of data security.
The Chinese government stated that it does not hold shares or ownership in ByteDance, TikTok’s parent company, which is majority-owned by foreign investors. The founder, Zhang Yiming, owns 20% of the company.
In 2018, Mr. Zhang posted a “self-confession” announcing the shutdown of an app due to content conflicting with core socialist values and failing to guide public opinion properly. Following criticism on state television, he acknowledged corporate weaknesses and the need for a better understanding and implementation of political theories promoted by Chinese Communist Party leader Xi Jinping.
The union representing tech workers in the UK has expressed concerns on behalf of British staff at Meta about the company’s decision to eliminate fact-checkers and diversity, equity, and inclusion programs, saying employees feel disappointed and worried about the future direction of the company.
Prospect union, which represents a growing number of UK Meta employees, has written to express these concerns to the company, highlighting the disappointment among long-time employees. They fear this change in approach may impact Meta’s ability to attract and retain talent, affecting both employees and the company’s reputation.
In a letter to Meta’s human resources director for EMEA, the union warns about potential challenges in recruiting and retaining staff following the recent announcements of job cuts and performance management system changes at Meta.
The union also seeks assurances that employees with protected characteristics, especially those from the LGBTQ+ community, will not be disadvantaged by the policy changes. They call for Meta to collaborate with unions to create a safe and inclusive workplace.
Employees are concerned about the removal of fact-checkers and increased political content on Meta’s platform, fearing it may lead to a hostile work environment. They highlight the importance of maintaining a culture of respect and achievement at Meta.
Referencing the government’s Employment Rights Bill, the union questions Meta’s efforts to prevent sexual harassment and ensure that employees with protected characteristics are not negatively impacted by the changes.
The letter from the union follows Zuckerberg’s recent comments on a podcast, where he discussed the need for more “masculine energy” in the workplace. Meta has been approached for comment on these concerns.
Chinese engineers are developing artificial intelligence chips for use in “advanced weapons systems” and have been granted access to cutting-edge British technology, as reported by the Guardian.
Moore Threads and Biren Technology, described as “China’s leading AI chip designers,” have been subject to U.S. export controls over their chip development. U.S. authorities noted that the technology can provide artificial intelligence capabilities for the advancement of weapons of mass destruction, advanced weapons systems, and high-tech surveillance applications that raise national security concerns.
Before being blacklisted in the US in 2023, the companies held broad licenses from UK-based Imagination Technologies, known for its expertise in designing advanced microchips essential for AI systems.
Imagination Technologies, a flagship of the UK technology industry, denied intentionally transferring its cutting-edge secrets to China. Its representatives confirmed the existence of licenses to Moore Threads and Biren Technology.
Allegations have arisen regarding Imagination’s partnerships with Chinese companies and the potential risks of knowledge transfer. Tensions between business with China and national security concerns have been highlighted by these developments.
Since 2020, at least three Chinese companies have obtained licenses to use Imagination’s chip designs, raising concerns about the potential misuse of intellectual property.
Imagination has worked closely with Apple in the past, contributing to the development of iPhone chips. However, concerns have been raised about the risks of sharing too much of its intellectual property with Chinese companies.
The acquisition of Imagination by a Chinese-backed buyer in 2017 raised further concerns about technology transfer and national security implications.
Imagination describes its arrangements with Chinese customers as “totally normal” and as limited in scope, duration, and usage rights.
Imagination says it does not do business with companies on the US government’s Entity List, suggesting the licenses granted to the Chinese companies were terminated when the firms were blacklisted in October 2023.
Moore Threads and Biren Technology, two Chinese chipmakers, have faced scrutiny over their development of GPUs for AI systems with potential ties to Imagination’s technology.
Funding for Biren Technology comes from the Russia-China Investment Fund, sparking concerns about deepening alliances between China and Moscow in the tech industry.
The history of video games is, in some ways, a history of subtle iterations on other people’s ideas. The interstellar success of Taito’s Space Invaders spawned an entire shooter genre, with titles like Galaxian, Phoenix, and Gorf taking the basic idea and adding new features. Then in 1984, Karate Champ started the fighting game craze, and Tetris gave us the falling-object puzzle game. This is how things have always worked: adapt, expand, and pass the baton. However, there is a subtle but deep gulf between imitation and inspiration, and not every title can cross it.
Chinese mega-publisher NetEase’s latest live service game, Marvel Rivals, is Overwatch with Marvel characters. That’s more than an elevator pitch; it’s precisely what the game is. Colorful cartoon characters with varying skills gather in a series of sci-fi arenas for team-based combat across a handful of play modes. The Punisher, a vanilla guy with a machine gun, is Overwatch’s Soldier 76 with a dash of Bastion. God-like healer Adam Warlock is a male Mercy. And the Hulk, as a fist-thumping tank, is simply a rampaging, hairless Winston, Overwatch’s gorilla. Gaming site GamesRadar even provides a handy guide showing players which Marvel cast members most resemble their Overwatch favorites.
Marvel Rivals. Photograph: Game Press
Many of the genre’s well-worn tropes and abilities have at least been remixed to suit the Marvel universe, and playing as these familiar legends adds an undeniable charm. From bludgeoning enemies with Thor’s hammer to sending exploding acorns flying as Squirrel Girl to slamming Captain America’s shield into Black Panther’s body armor, Rivals captures the comic dynamics of this famous cast perfectly, so much so that a large-scale skirmish can feel like the most exciting scene in an episode of the X-Men ’97 cartoon. It’s also great that all 33 heroes are available for free from the beginning. Of course, there’s also a store and battle pass, but for now these only offer alternative costumes, emotes, and other accessories, and completing daily missions and seasonal story objectives earns currency to buy this kind of stuff without paying a penny.
Additionally, the game has a big new feature, Team-Ups, which unlock additional hero abilities when at least two players on the same side select complementary characters. There’s a symbiote bond between Venom, Spider-Man, and Peni Parker that allows the latter two to channel the former’s alien powers, and a Ragnarok Rebirth team-up that allows Hela to resurrect Thor and Loki. These kinships can greatly facilitate tactical play.
Marvel Rivals. Photograph: NetEase Games
But Rivals in many ways reflects key tenets of the bible of hero shooter design. In other words, for every positive there is always a negative. The sheer number of Marvel’s super freaks and their team-up powers make the game feel very unbalanced at times. Characters like Storm and Iron Man are difficult to counter when they can stay in the air for the entire match, picking off enemies from a distance and avoiding most of the incoming gunfire. Big guys like Venom and Moon Knight tend to completely dominate the area they’re fighting, often at the expense of melee-based combatants who need to get close to deal significant damage. I never expected Wolverine to become one of the most nuanced and sophisticated characters in Marvel’s cast, but here we are.
This game is definitely luxurious in both look and feel. The user interface design of the menu system and information screens is excellent, destructible locations shine with detail, and the characters are beautifully reproduced. However, here too there are drawbacks. Amid the chaos of a superhero riot, with explosions, magical attacks, and “hilarious” banter all happening at once, it’s difficult to figure out what’s hurting you, and what you should be hurting instead, until it’s too late.
These characters will doubtless receive buffs and nerfs in time to even out the balance, and players will begin to learn how to combine team members more strategically. But even if the balance issues were resolved, what we’d be left with is the video game equivalent of the changeling of folklore: a supernaturally accurate replacement designed to ensnare those who loved the original. The question is, can you really blame Rivals for getting close enough to Overwatch to risk a restraining order? As the failed hero shooters Hyenas, Concord, and XDefiant recently demonstrated, the brutal economics of the live service market demand absolute loyalty to established norms, and it doesn’t hurt to attach a huge global license.
Rivals, like many other highly polished and highly focused franchise expansions, is entertaining, gorgeous, and well-made. However, its existence bodes ill for the mainstream gaming industry and the people who work in it. To be successful, especially in the heavily capitalized live service sector, there is no need to expand or challenge a genre: all you have to do is take someone else’s innovative concept, replicate it, and refranchise it. Meanwhile, studios that launch new ideas and original characters are doomed to failure; millions of dollars are lost, jobs are lost, and the game is over.
Rivals is packed with Stan Lee superheroes, but its message about gaming’s all-out Funko Pop-ification is as dark as a Charles Burns graphic novel.
Just by clicking on the “shiny babe” filter, the teenager’s face was subtly elongated, her nose was streamlined, and her cheeks were sprinkled with freckles. Then, the Glow Makeup filter removed blemishes from her skin, made her lips look like rosebuds, and extended her eyelashes in a way that makeup can’t. On the third click, her face returned to reality.
Today, hundreds of millions of people use beauty filters to change the way they look on apps like Snapchat, Instagram, and TikTok. This week TikTok announced new global restrictions on children’s access to products that mimic the effects of cosmetic surgery.
The announcement followed research into the feelings of around 200 teens and their parents in the UK, US, and several other countries, which found that girls in particular reported “feelings of low self-esteem” as a result of their online experiences.
There are growing concerns about the impact of rapidly advancing technology on health, with generative artificial intelligence enabling what has been called a new generation of “micropersonality cults.” This is no small thing. TikTok has around 1 billion users.
Upcoming research by Sonia Livingstone, professor of social psychology at the London School of Economics, will argue that the pressures and social comparisons resulting from the use of increasingly image-manipulated social media can be more psychologically damaging than viewing violent content, with major implications for health.
TikTok effect filters (left to right): original image without filter, Bold Glamour, BW x Drama Rush by jrm, and Roblox Face Makeup. Composite: TikTok
Hundreds of millions of people use alternate reality filters on social media every day, from cartoon dog ears to beauty filters that change the shape of your nose, whiten your teeth, and enlarge your eyes.
Dr Claire Pescott, an educationist at the University of South Wales who has studied children aged 10 and 11, agreed that the impact of online social comparison is being underestimated. In one of her studies, children who were dissatisfied with their appearance said they wished they had a filter on.
“There is a lot of education going on about internet safety, about protecting yourself from pedophiles and catfish [using a fake online persona to enable romance or fraud],” she said. “But in reality, the more immediate danger is comparing yourself to others, which has more of an emotional impact.”
But some people resist restrictions on filters they feel are a fundamental part of their online identity. Olga Isupova, a Russian digital artist living in Greece who designs beauty filters, called such a move “ridiculous.” She added that having an adapted face is a necessary part of being “multiple people” in the digital age.
“People live normal lives, but it’s not the same as their online lives,” she said. “That’s why you need an adjusted face for your social media life. For many people, [online] is a very competitive field; it’s about Darwinism. Many people use social media not just for fun, but also as a place to make money and improve their lives and futures.”
In any case, age restrictions on some of TikTok’s filters are unlikely to solve the problem anytime soon. One in five 8- to 16-year-olds lies about being over 18 on social media apps, Britain’s communications regulator Ofcom has found, and rules tightening age verification will not come into force until next year.
A growing body of research shows that some beauty filters are dangerous for teenagers. Last month, a small survey of female students in Delhi who use Snapchat found that most reported “lower self-esteem and feelings of inadequacy when juxtaposing their natural appearance with filtered images.” A 2022 study of more than 300 Belgian adolescents found that use of face filters was associated with greater acceptance of the idea of cosmetic surgery.
“Kids who are more resilient look at these images and say, oh, this is a filter, but kids who are more vulnerable tend to feel bad when they see it,” Livingstone said. “There is growing evidence that teenage girls feel vulnerable about their appearance.”
When TikTok’s research partner Internet Matters asked a 17-year-old in Sweden about beauty filters, she replied that the effects should look more like how people appear in real life.
Jeremy Bailenson, founding director of Stanford University’s Virtual Human Interaction Laboratory, said more experimental research is needed into the social and psychological effects of the most extreme beauty filters.
In 2007, he helped coin the term “Proteus effect,” which describes how people’s behavior changes to match their online avatar: people given more attractive virtual selves disclosed more about themselves than those given less attractive ones.
“We need to strike a careful balance between regulation and welfare concerns,” he said. “Small changes to our virtual selves can quickly become tools we rely on, such as the ‘touch-up’ feature in Zoom and other video conferencing platforms.”
In response, Snapchat said it doesn’t typically receive feedback about the negative impact its “beauty lenses” have on self-esteem.
Meta, the company behind Instagram, said it walks a fine line between safety and expression through augmented reality effects. The company said it consulted with mental health experts and banned filters that directly encourage cosmetic surgery, such as mapping surgical lines on a user’s face or promoting the procedure.
TikTok said it draws a clear distinction between effects such as animal-ear filters and effects designed to change one’s physical appearance, and that teens and parents had voiced concerns about the latter. In addition to the restrictions, it said it would raise awareness among those making filters about “some of the unintended consequences that certain effects can cause.”
Teenagers are facing new restrictions on beauty filters on TikTok that are aimed at addressing concerns about increasing anxiety and decreasing self-esteem.
In the near future, users under 18 will not be able to use filters that artificially alter features like enlarging eyes, plumping lips, or changing skin color.
Filters such as “Bold Glamour” that significantly alter a user’s appearance will be affected, while simple comic filters like bunny ears or dog noses will remain available. The changes were announced by TikTok during a safety forum at its European headquarters in Dublin.
Despite these restrictions, their effectiveness depends on users accurately reporting their age to the platform.
Beauty filters on TikTok, whether provided by the platform or created by users, are a source of concern as they pressure teenagers, especially girls, to conform to unrealistic beauty standards and can lead to negative emotional impacts. Some young users have reported feeling insecure about their real appearance after using filters.
TikTok will also enhance its systems to prevent users under 13 from accessing the platform, potentially resulting in the removal of thousands of underage British users. An automated age detection system using machine learning will be piloted by the end of the year.
These actions come in response to stricter regulations on minors’ social media use under the Online Safety Act in the UK. TikTok already deletes millions of underage accounts globally each quarter.
Chloe Setter, head of public policy for child safety at TikTok, stated that they aim for faster detection and removal of underage users, understanding that this might be inconvenient for some young people.
Ofcom’s report from last December highlighted TikTok’s removal of underage users while raising concerns about the effectiveness of age verification enforcement. Strict enforcement of the 13+ age limit for social media users is due to begin next summer.
Social media platforms will introduce new rules regarding beauty filters and age verification, anticipating stricter regulations on online safety in the future. These adjustments are part of broader efforts to enhance online safety.
Other platforms like Roblox and Instagram are also implementing measures to enhance child safety, reflecting a growing concern about the impact of social media on young users.
Andy Burrows, CEO of the Molly Rose Foundation, emphasized the importance of transparent age verification measures and the need to address harmful content promoted on social media platforms.
The NSPCC welcomed measures to protect underage users but stressed the need for comprehensive solutions to ensure age-appropriate experiences for all users.
An influential Saudi dissident who collaborated closely with Jamal Khashoggi was harmed when Saudi agents breached the company then known as Twitter Inc. in 2014, a U.S. appeals court has found. In response, the dissident says he plans further legal action against the company.
Personal details about Canadian resident Omar Abdulaziz, a vocal critic of Saudi Arabia’s crown prince, Mohammed bin Salman, were exposed to Riyadh by a Twitter employee who accessed the anonymous account Abdulaziz used. The information was later used by the Saudi government in its efforts to silence Abdulaziz’s criticisms.
The breach, dating back almost a decade and involving around 6,000 accounts, was uncovered in 2018 and had severe repercussions for Abdulaziz, including the incarceration of his family in Saudi Arabia. Saudi operatives also obtained Abdulaziz’s phone number, which was exploited by the Saudis. Citizen Lab researchers later revealed that Abdulaziz was targeted using NSO Group spyware while he was in close contact with Khashoggi, who was tragically killed a few months later.
Abdulaziz is now pursuing claims against both Twitter and its successor X Corp, owned by Donald Trump’s adviser Elon Musk.
A recent appeals court ruling dismissed Abdulaziz’s lawsuit, which accused the social media platform of negligence in failing to prevent Saudi operatives from accessing his account, because it did not meet California’s statute of limitations requirements. The court did, however, recognize that Abdulaziz had standing to sue over the alleged harm caused by the company’s actions. In light of this, Abdulaziz intends to seek a review under which the court could reconsider its decision. Twitter claimed at the time of the breach that it was a “victim” of employee misconduct.
This incident highlights the ongoing threats faced by activists and critics of authoritarian governments, who are subjected to harassment, surveillance, and violence even in countries like the United States and Canada that were once considered safe havens. Such cross-border repression has been attributed to countries including Saudi Arabia, Iran, the United Arab Emirates (UAE), and India.
In 2020, The Guardian reported that Abdulaziz had been alerted by Canadian authorities about being a potential target for Saudi Arabia, advising him to take precautions to ensure his safety.
Ronald Deibert of Citizen Lab at the University of Toronto’s Munk School expressed concerns about the Trump administration’s potential impact on cross-border repression. He warned that advancements made in regulating tools used for repression could be reversed, posing a significant risk to civil society.
In 2021, the Biden administration blacklisted Israel’s NSO Group due to concerns about the spread of its surveillance software and its threat to U.S. national security. However, NSO lobbyists are actively trying to reverse this classification through the Department of Commerce.
One prominent example of cross-border crackdowns on U.S.-linked dissidents was the brutal killing of Khashoggi in 2018. Following the murder, the U.S. imposed sanctions against several individuals, with President Biden later releasing an intelligence report implicating Prince Mohammed in the murder.
Abdulaziz stressed the importance of holding companies accountable for their users’ safety in a statement to The Guardian, saying no one should suffer because of a company’s failure to protect against hacks.
X did not respond to The Guardian’s requests for comment.
After Musk, X’s largest investor is a company led by Saudi billionaire Prince Alwaleed bin Talal, who was himself imprisoned by the Saudi government in 2017 and has not left Saudi Arabia or the UAE since. Prince Alwaleed recently met with X CEO Linda Yaccarino to underscore the strong ties between X and his company Kingdom Holding, which is partially owned by Saudi Arabia.
During a visit to the Middle East, Yaccarino also met with Dubai’s leader Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum. UK court findings revealed that Sheikh Maktoum’s agents used NSO spyware to target the phones of his ex-wife and her legal team in 2021.
As Debbie was scrolling through X in April, she saw some unwelcome posts in her feed. One was a photo of a visibly skinny person asking if they were thin enough. Another post wanted to compare how few calories users were consuming in a day.
Debbie, who did not want to give her last name, is 37 and was first diagnosed with bulimia when she was 16. She did not follow either of the accounts behind the posts in the group, which has more than 150,000 members on the social media site.
Out of curiosity, Debbie clicked on the group. “As I scrolled down, I saw a lot of pro-eating disorder messages,” she said. “People asking for opinions about their bodies, people asking for advice on fasting.” A post pinned by an admin urged members to “remember why we’re starving.”
The Observer found seven more groups on X, totalling around 200,000 members, openly sharing content promoting eating disorders. All of the groups were created after Twitter was bought by billionaire Elon Musk in 2022 and rebranded as X.
Eating disorder campaigners said the scale of harmful content showed a serious failure of moderation by X. Wera Hobhouse MP, chair of the cross-party parliamentary group on eating disorders, said: “These findings are extremely worrying… X should be held accountable for allowing this harmful content to be promoted on its platform, which puts so many lives at risk.”
The internet has long been a hotbed of content promoting eating disorders (sometimes called “pro-ana”), from message boards to early social media sites like Tumblr and Pinterest, which banned posts promoting eating disorders and self-harm in 2012 following outcry over their prevalence.
Debbie remembers the pro-ana message boards of the early internet, but “I had to search to find them.”
This kind of content is now more accessible than ever before, and critics of social media companies say it is pushed to users by algorithms, resulting in more and sometimes increasingly explicit posts.
Social media companies have come under increasing pressure in recent years to step up safety measures following a series of deaths linked to harmful content.
At an inquest into the death of 14-year-old Molly Russell, who died by suicide in 2017 after viewing suicide and self-harm content, the coroner ruled that online content contributed to her death.
Two years later, in 2019, Meta-owned Instagram announced it would no longer allow any graphic content depicting self-harm. The Online Safety Act passed last year requires tech companies to protect children from harmful content, including content promoting eating disorders, and will impose heavy fines on violators.
Baroness Parminter, who sits on the cross-party group, said the Online Safety Act was a “reasonable start” but failed to protect adults. “The obligations on social media providers only cover content that children are likely to see – and of course eating disorders don’t stop when you turn 18,” she said.
X’s user policy states that it does not allow content that encourages or promotes self-harm, which explicitly includes eating disorders. Users can report posts that violate X’s policies, and can use timeline filters to mark content as “not interested.”
But concerns about a lack of moderation have grown since Musk took over the site: Just weeks later, in November 2022, he fired thousands of staff, including moderators.
Musk also brought changes to X that meant users would see more content from accounts they didn’t follow. The platform introduced a “For You” feed, which became the default timeline.
In a blog post last year, the company said that about 50% of the content that appears in this feed comes from accounts the user doesn’t yet follow.
In 2021, Twitter launched “Communities” as an answer to Facebook Groups. Communities have become more prominent since Musk became CEO. In May, Twitter announced that “Your timeline will now show recommendations for communities you might enjoy.”
In January, Meta, a rival to X, which owns Facebook and Instagram, said it would continue to allow the sharing of content documenting struggles with eating disorders but would no longer encourage it and make it harder to find. While Meta began directing users searching for eating disorder groups to safety resources, X does not show any warnings when users are looking for such communities.
Debbie said she found X’s harmful content filtering and reporting tools ineffective, and she shared screenshots of the groups’ posts with the Observer. Even after she reported a post and flagged it as not relevant, it continued to appear in her feed.
Mental health activist Hannah Whitfield deleted all of her social media accounts in 2020 to aid her recovery from an eating disorder. She later returned to some sites, including X, where “thinspiration” posts glorifying unhealthy weight loss appeared in her For You feed. “What I found with X was that [eating-disorder content] was a lot more extreme and radical. Obviously it was a lot less moderated and I felt it was a lot easier to find something very explicit.”
Eating disorder support groups stress that social media does not cause eating disorders, and that people who post pro-eating disorder content are often unwell and do not mean any harm, but social media can lead people who are already struggling with eating disorders down a dark path.
The authors of one study, who analysed two million eating disorder posts on X, said the platform offers people with these illnesses a “sense of belonging,” but that unmoderated communities can become “toxic echo chambers that normalise extreme behaviour.”
Paige Rivers was first diagnosed with anorexia when she was 10. Now 23 and training to be a nurse, she came across eating disorder content in her X feed.
Rivers said she found that X’s settings, which allow users to block certain hashtags or phrases, were easily circumvented.
“People started using weird hashtags, like ‘anorexia’ spelled with a combination of numbers and letters, and that got through,” she said.
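Rivers’ experience points to a well-known weakness of keyword blocklists: a literal string match misses trivially obfuscated variants. As a minimal illustrative sketch, assuming a hypothetical blocklist and substitution map rather than X’s actual moderation rules, the Python below shows why a normalization pass catches some number-and-letter spellings while remaining easy to defeat with new ones.

```python
# Toy hashtag filter with leetspeak normalization. The blocklist and
# substitution map are hypothetical illustrations, not X's real rules.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})
BLOCKLIST = {"anorexia"}  # hypothetical blocked term

def normalize(tag: str) -> str:
    """Lowercase the tag and undo common digit/symbol substitutions."""
    return tag.lower().translate(LEET_MAP)

def is_blocked(hashtag: str) -> bool:
    """Check the normalized tag against the blocklist."""
    return normalize(hashtag) in BLOCKLIST

print(is_blocked("4n0r3x14"))   # True: caught once normalized
print(is_blocked("anorexiaa"))  # False: a new spelling slips through
```

Every new spelling forces another rule, which is one reason campaigners argue that blocklists alone cannot substitute for active moderation.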
Tom Quinn, director of external affairs at the eating disorder charity Beat, said: “The fact that these so-called ‘pro-ana’ groups are allowed to proliferate demonstrates an extremely worrying lack of moderation on platforms like X.”
For those in recovery, like Debbie, social media held the promise of support.
But Debbie feels powerless to limit it, and the constant exposure to provocative content is backfiring: “It discourages me from using social media, and it’s really sad because I struggle to find people in a similar situation or who can give me advice about what I’m going through,” she says.
X did not respond to a request for comment.
The Royal Society is facing pressure to remove technology mogul Elon Musk from its membership due to concerns about his behavior.
As reported by The Guardian, Musk, known for owning the social media platform X, was elected a fellow of the Royal Society, Britain’s national academy of sciences, in 2018. Some view him as a contemporary innovator comparable to Brunel for his contributions to the aerospace and electric vehicle sectors.
Musk, a co-founder of SpaceX and the CEO of Tesla, has been commended for advancing reusable rocket technology and promoting sustainable energy sources.
Nevertheless, concerns have been raised by several Royal Society fellows regarding Musk’s membership status, citing his provocative comments, particularly following recent riots in the UK.
Critics fear that Musk’s statements could tarnish the society’s reputation. Musk’s companies, including X, have been approached for comment.
Musk’s social media posts during the unrest were widely condemned, with Downing Street rebuking his remarks about civil war and false claims about UK authorities.
The concerns around potentially revoking Musk’s membership focus on his ability to promote his beliefs responsibly and not on his personal views.
The Royal Society’s Code of Conduct emphasizes that fellowship entails upholding certain standards of behavior, even in personal communications, to safeguard the organization’s reputation.
The Code stipulates that breaching conduct rules may result in disciplinary measures, such as temporary or permanent suspension. Specific procedures are outlined if misconduct allegations are raised against a Fellow or Foreign Member.
Expelling a member from the Royal Society is rare, with no record of such action in over a century. In one previous controversy, the society’s director of education resigned over remarks about teaching creationism in schools.
A Royal Society spokesperson assured that any concerns regarding individual Fellows would be handled confidentially.
Recently, ChildLine counselors have been receiving an alarming number of calls regarding a specific issue.
In one case, a 17-year-old boy reached out for help after being blackmailed for sending intimate images to someone he thought was his age. This type of sextortion, driven by financial motives, is becoming more prevalent among UK teenagers.
Childline supervisor Rebecca Hipkiss revealed that these incidents have increased significantly over the past year, with over 100 cases reported. Victims often feel embarrassed and fear the repercussions of having their personal images shared with their friends and family.
Childline, operated by the NSPCC children’s charity, offers a “Report Remove” service to help victims of sexual blackmail take control of their images online. The service creates a digital fingerprint of uploaded images to prevent them from being circulated on major platforms.
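Such “digital fingerprints” are typically perceptual hashes: compact signatures that survive resizing and re-encoding, so a re-uploaded copy can be matched without the service ever storing or re-sharing the image itself. As an illustration only, not the hashing scheme Report Remove actually uses, here is a minimal average-hash sketch in Python using the Pillow library, with hypothetical file names:

```python
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Illustrative perceptual hash: shrink, grayscale, threshold at the mean.

    Re-encoded or resized copies of an image yield the same or a very
    similar bit pattern, so a platform can match re-uploads against a
    stored hash instead of keeping the image itself.
    """
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# Hypothetical file names for illustration.
reported = average_hash("reported_image.jpg")
upload = average_hash("new_upload.jpg")
if hamming_distance(reported, upload) <= 5:  # tolerate minor edits
    print("Likely match: block the upload.")
```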
With the rise of sophisticated AI tools, teenagers are now facing threats of deepfake content being created using their photos. These fake images are then used to extort money from victims, causing significant distress.
Victims of sex blackmail often feel helpless and worried about the consequences of these incidents. Childline advises them not to pay the scammers and to report the extortion attempts to the authorities.
It’s crucial for teens to be cautious and set boundaries in their online interactions. Understanding the risks and knowing when to say no are essential in protecting themselves from falling victim to such scams.
While the number of fires so far is typical for this point in the summer, the extreme heat of early summer has dried out the landscape, increasing the risk of wildfires and casting doubt over what had been expected to be a relatively mild season.
“Wildfire conditions across the West continue to worsen and unfortunately will get worse,” Daniel Swain, a climate scientist at the University of California, Los Angeles and the National Center for Atmospheric Research, said at a briefing on Thursday. “The past 30 days have been the warmest on record across a significant portion of California and the West.”
Flames from the Thompson Fire in Oroville, California, on July 2. Tayfun Coskun/Anadolu via Getty Images
A vehicle is engulfed in flames during the Thompson Fire in Oroville, California, on July 2. Ethan Swope/AP
After California experienced two consecutive wet winters, the National Interagency Fire Center had predicted moderate fire activity in the state this summer and fall. That seasonal forecast was revised upward this month, with forecasters noting that grass that grew tall during the wet weather had dried out quickly in the heat.
“You know, we've had two really great winters where the atmospheric river came in and saved California from drought, but the tradeoff is that now we have a ton of grass and shrubs that are dead and ready to burn,” said Kaitlyn Trudeau, a senior scientist at the nonprofit research organization Climate Central.
Debris from buildings and vehicles is left behind as the Apache Fire burns in Palermo, California, on June 25. Ethan Swope/AP
Firefighters work to put out the growing Post Fire in Gorman, California, on June 16. Eric Thayer/AP
Swain said recent outbreaks of “dry lightning” – thunderstorms that don't produce rain – were of particular concern because long-range forecasts showed another heat wave hitting the region in late July, which could exacerbate existing fires.
A recent analysis from satellite monitoring company Maxar suggests that soil moisture levels in California dropped sharply from early June through July 15, while temperatures over the same period were about 5 degrees Fahrenheit warmer than in 2020.
That June, California had a drought outlook and wildfire risk profile similar to this year's. Then, in mid-August, more than 10,000 lightning strikes hit California over a three-day period, igniting dozens of fires. Fueled by a heat wave, many of these fires grew rapidly and eventually merged into three enormous complex fires. One of them, the August Complex fire, burned primarily in the Mendocino National Forest and consumed more than one million acres.
California's 2020 wildfires killed a total of 33 people and scorched 4.5 million acres.
“It's really concerning to see these statistics because we're only halfway through July, and the major lightning storms of 2020 didn't come until August,” Trudeau said of this year's data. “We're already starting to see dry thunderstorms, and we still have a long way to go before the end of the year.”
Across the U.S., more than 1 million acres have burned so far this year, with 54 large fires currently being fought, according to the National Interagency Fire Center.
Wildfire season is off to an early and active start in the Pacific Northwest, particularly in Oregon, with several large blazes burning in remote areas.
Smoke rises from a wildfire near La Pine, Oregon on June 25. Kyle Kalambach/Deschutes County Sheriff's Office via AP File
Leading up to the Paris Olympics, athletes are raising concerns about the scorching summer temperatures and the impact of climate change on their competitiveness and safety in sports.
In a recent report by climate advocacy and sports organizations in the UK and US, 11 athletes have highlighted the environmental challenges at the upcoming Olympics and the long-term implications for sporting competitions in a warming world.
According to the report, the average temperature in Paris during the Olympic period is projected to be more than 5.5 degrees higher than in 1924, the last time the city hosted the event.
Jamie Farndale, a former GB Olympic rugby sevens team member, expressed concerns that the extreme heat in Paris could affect athletes’ performance.
“When temperatures reach 30 to 35 degrees Celsius, it becomes quite dangerous,” Farndale said. “With six games in three days, athletes don’t have enough time to cool off between matches.”
Olympic organizers are taking measures to combat the heat, scheduling events strategically to minimize exposure and implementing heat-response tools to ensure safety, according to an International Olympic Committee spokesperson.
Local organizers for Paris 2024 have stated that France’s meteorological service will be closely monitoring temperatures, and adjustments to competition dates can be made if needed. Free water will also be available to spectators to help combat the heat.
Paris, being one of the European capitals most vulnerable to heat waves, is focusing on reducing the carbon footprint of the Olympics. Geothermal cooling and natural ventilation will be used in the athletes’ village, which will serve as permanent housing post-Olympics.
Athlete Pragnya Mohan raised concerns about the lack of air conditioning potentially affecting athletes’ recovery rate. However, the Paris 2024 committee assured that temperatures in athlete accommodations would be significantly lower than outside, and portable cooling units would be available for rent.
Discus thrower Sam Mattis questioned the feasibility of hosting the Summer Olympics during the hottest time of the year, given the challenges posed by extreme heat. Research has shown that heat-related illnesses have affected athletes at past Olympics, prompting the need for climate-conscious measures.
Investments in sustainable practices, such as cleaning up the Seine, building bike lanes, and planting shade trees, demonstrate Paris’ commitment to reducing emissions and adapting to future climate challenges.
Deputy Mayor Emmanuel Gregoire stressed the urgency of making changes to protect people from the dangers of extreme heat in everyday life.
During its annual developers conference on Monday, Apple introduced Apple Intelligence, an eagerly anticipated artificial intelligence system designed to personalize user experiences, automate tasks, and, as CEO Tim Cook assured, set “a new standard of privacy in AI.”
Although Apple emphasizes that its AI prioritizes security, its collaboration with OpenAI has faced criticism. ChatGPT, launched in November 2022, raised privacy concerns by collecting user data for model training without explicit consent; users were given the option to opt out of this data collection in April 2023.
Apple has assured that its collaboration with ChatGPT will be limited to specific tasks with explicit user consent, but security experts remain vigilant about how these concerns will be addressed.
Late to the game in generative AI, Apple has trailed behind competitors like Google, Microsoft, and Amazon, whose AI ventures have boosted their stock prices. Apple has refrained from integrating generative AI into its main consumer products.
Apple aims to apply AI technology responsibly, building Apple Intelligence products over several years using proprietary technology to minimize user data leakage from the Apple ecosystem.
AI, which requires vast data to train language models, poses a challenge to Apple’s focus on privacy. Critics like Elon Musk argue that it’s impossible to balance AI integration and user privacy. However, some experts disagree.
“By pursuing privacy-focused strategies, Apple is leading the way for businesses to reconcile data privacy with innovation,” said Gal Ringel, CEO of a data privacy software company.
Many recent AI releases have been criticized for being dysfunctional or risky, reflecting Silicon Valley’s “move fast and break things” culture. Apple seems to be taking a more cautious approach.
According to Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance: “Historically, platforms release products first and address issues later. Apple is proactively tackling common concerns. This illustrates the difference between designing security measures upfront versus addressing them reactively, which is always less effective.”
Central to Apple’s AI privacy measures is its new Private Cloud Compute technology. Apple intends to run most Apple Intelligence processing on the device itself; for tasks requiring more processing power, the company will offload work to the cloud while safeguarding user data.
To achieve this, Apple will only share the data necessary for each request, implement additional security measures at endpoints, and avoid long-term data storage. Apple will also open tools and software related to its private cloud for third-party verification.
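Apple has not published code for this pipeline, but the data-minimization principle it describes is straightforward to sketch. The hypothetical Python below, in which every name is an illustrative assumption rather than an Apple API, shows the routing logic the paragraph outlines: run what fits on the device, and for cloud-bound requests transmit only the fields the task needs, never the user’s identifier.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str          # stays on the device in this sketch
    task: str
    payload: str
    needs_big_model: bool

def run_on_device(req: Request) -> str:
    """Default path: the request never leaves the device."""
    return f"on-device result for {req.task}"

def run_in_private_cloud(task: str, payload: str) -> str:
    """Stand-in for a stateless, verifiable cloud endpoint that receives
    only a minimal payload and, per the stated design, retains nothing."""
    return f"cloud result for {task}"

def handle(req: Request) -> str:
    if not req.needs_big_model:
        return run_on_device(req)
    # Data minimization: only the task and its payload are transmitted;
    # the user identifier and unrelated device data are not.
    return run_in_private_cloud(req.task, req.payload)

print(handle(Request("local-user", "summarize", "meeting notes...", True)))
```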
Private cloud computing represents a significant advancement in AI privacy and security, according to Krishna Vishnubhotla, VP of product strategy at Zimperium, who called the independent audit component particularly noteworthy.
The EU has given TikTok 24 hours to conduct a risk assessment of a new service it has launched over concerns it could encourage children to become addicted to videos on the platform.
Launched this month in France and Spain, TikTok Lite is an app that lets users earn rewards just by watching: points, paid in TikTok’s coin currency and earned through “tasks,” can be exchanged for prizes such as Amazon coupons or gift cards via PayPal.
“Tasks” include watching videos, liking content, following creators, inviting friends to TikTok, and more.
The European Commission said TikTok, owned by China’s ByteDance, should have carried out a risk assessment before introducing the app, and said it was now seeking “further details”.
The intervention comes months after sweeping new rules came into force under the Digital Services Act (DSA), which requires technology companies and social media platforms to follow new rules on the services they offer users and the removal of illegal content.
In February, the commission launched a formal investigation into TikTok, assessing alleged violations of the DSA in areas related to the protection of minors, advertising transparency, and risk management around addictive design and harmful content.
Investigations into child protection on TikTok include age verification, an issue highlighted by a Guardian investigation into the platform last year.
While the commission said its request for further information about TikTok’s internal controls does not prejudge the possibility of further action, it noted that it has the power to impose fines over “any information that is inaccurate, incomplete, or misleading” in a response.
The commission said its request related to concerns “about the potential impact of the new Tasks and Rewards Lite program on the protection of minors and the mental health of users, particularly in relation to the potential stimulation of addictive behavior.”
Last year, US Surgeon General Vivek Murthy formally warned the nation that social media poses a “risk of serious harm” to the mental health of children and adolescents.
In September, TikTok was fined €345 million by its lead EU privacy regulator for violating privacy laws in the processing of children’s personal data.
In addition to the 24-hour deadline for the risk assessment, TikTok must also provide other information by April 26, the commission said.
The company said it would honor the request. “We have already been in direct contact with the commission regarding this product and will respond to requests for information,” a TikTok spokesperson said.
The company said the rewards program is limited to users aged 18 and over, subject to age verification, and that payouts are capped at €1 (approximately £0.85) per day.
After what amounted to a dry run in Taiwan’s presidential election this year, China is expected to try to disrupt elections in the United States, South Korea, and India with artificial intelligence-generated content, Microsoft has warned.
The tech giant predicts that Chinese state-backed cyber groups will target high-profile elections in 2024, with North Korea also getting involved, according to a report released by the company’s threat intelligence team.
“As voters in India, South Korea, and the United States participate in elections, Chinese cyber and influence actors, along with North Korean cyber attack groups, are expected to influence these elections,” Microsoft mentioned.
Microsoft stated that China will create and distribute AI-generated content through social media to benefit positions in high-profile elections.
Although AI-generated content has so far had little effect in swaying audiences, China is increasingly experimenting with enhanced memes, videos, and audio, which may prove effective in the future.
During Taiwan’s presidential election in January, China attempted an AI-powered disinformation campaign for the first time to influence a foreign election, Microsoft reported.
The Beijing-backed group Storm-1376, also known as Spamouflage or Dragonbridge, flooded Taiwan’s election with AI-generated content spreading false information about candidates.
Chinese groups are also engaged in influencing operations in the United States, with Chinese government-backed actors using social media to probe divisive issues among American voters.
In a blog post, Microsoft stated, “This may be to collect intelligence and obtain accurate information on key voting demographics ahead of the US presidential election.”
The report coincides with a White House board’s announcement of a Chinese cyber operator infiltrating US officials’ email accounts due to errors made by Microsoft, as well as accusations of Chinese-backed hackers conducting cyberattacks targeting various entities in the US and UK.
OpenAI’s latest tool can create a convincing replica of someone’s voice from just 15 seconds of recorded audio. Citing the threat of misinformation during a critical global election year, however, the AI lab is not releasing it to the public, in an effort to limit potential harm.
Voice Engine was first developed in 2022 and initially powered ChatGPT’s text-to-speech functionality. Despite its capabilities, OpenAI has refrained from publicizing it widely, taking a cautious approach to a broader release.
Through discussions and testing, OpenAI aims to make informed decisions about the responsible use of synthetic speech technology. Selected partners have access to incorporate the technology into their applications and products after careful consideration.
Various partners, like Age of Learning and HeyGen, are utilizing the technology for educational and storytelling purposes. It enables the creation of translated content while maintaining the original speaker’s accent and voice characteristics.
OpenAI showcased a study where the technology helped a person regain their lost voice due to a medical condition. Despite its potential, OpenAI is previewing the technology rather than widely releasing it to help society adapt to the challenges of advanced generative models.
OpenAI emphasizes the importance of protecting individual voices in AI applications and educating the public about the capabilities and limitations of AI technologies. The voice engine is watermarked to enable tracking of generated voices, with agreements in place to ensure consent from original speakers.
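OpenAI has not disclosed how that watermark works. Purely as a toy illustration of the general idea, hiding a recoverable identifier in the audio so a generated clip can be traced back to its origin, here is a least-significant-bit scheme in Python; real generative-audio watermarks are far more sophisticated and robust to compression and editing.

```python
# Toy least-significant-bit audio watermark, for illustration only.
# OpenAI has not disclosed the design used in Voice Engine.
import numpy as np

def embed_id(samples: np.ndarray, tag: int, bits: int = 32) -> np.ndarray:
    """Hide a 32-bit identifier in the LSBs of the first `bits` samples."""
    out = samples.copy()
    for i in range(bits):
        bit = (tag >> i) & 1
        out[i] = (out[i] & ~1) | bit   # overwrite the least significant bit
    return out

def extract_id(samples: np.ndarray, bits: int = 32) -> int:
    """Read the identifier back out of a clip."""
    return sum((int(samples[i]) & 1) << i for i in range(bits))

pcm = np.random.randint(-2**15, 2**15, size=16000, dtype=np.int16)  # 1s of fake audio
marked = embed_id(pcm, tag=0xBEEF)
assert extract_id(marked) == 0xBEEF  # identifier survives and is recoverable
```

An LSB mark like this would not survive re-encoding, which is exactly why production systems favor signal-level watermarks; the sketch only shows why a watermark makes generated voices traceable.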
While OpenAI’s tools are known for their simplicity and efficiency in voice replication, competitors like Eleven Labs offer similar capabilities to the public. To address potential misuse, precautions are being taken to detect and prevent the creation of voice clones impersonating political figures in key elections.
The AI program Sora generated a video featuring an android based on text prompts.
OpenAI has announced Sora, a state-of-the-art artificial intelligence system that can turn text descriptions into photorealistic videos. The video generation model has added to the excitement over advances in AI, along with growing concern about how synthetic deepfake videos will exacerbate misinformation and disinformation during a critical election year around the world.
Sora can currently create videos of up to 60 seconds from text instructions alone or from text combined with an image. One demonstration video begins with a text prompt describing a "stylish woman walking down a Tokyo street filled with warm glowing neon and animated city signage." Other examples include more fantastical scenarios, such as dogs frolicking in the snow, vehicles driving along roads, and sharks swimming through the air between city skyscrapers.
"As with other techniques in generative AI, there is no reason to believe that text-to-video will not continue to advance rapidly, moving us closer and closer to a time when it will be difficult to distinguish the fake from the real," says Hany Farid at the University of California, Berkeley. "Combining this technology with AI-powered voice cloning could open up an entirely new front when it comes to creating deepfakes of people saying and doing things they never actually did."
Sora is based on some of OpenAI's existing technologies, including the image generator DALL-E and the GPT large language models. Text-to-video AI models have lagged somewhat behind other technologies in realism and accessibility, but the Sora demonstrations are "an order of magnitude more believable and less cartoonish" than what has come before, says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization focused on social engineering.
To achieve this higher level of realism, Sora combines two different AI approaches. The first is a diffusion model, similar to those used in AI image generators such as DALL-E; these models learn to gradually transform randomized image pixels into a coherent image. The second is a transformer architecture, used to contextualize and stitch together sequential data; large language models, for example, use transformer architectures to assemble words into comprehensible sentences. Here, OpenAI breaks video clips down into visual "spacetime patches" that Sora's transformer architecture can process.
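OpenAI's report describes this representation only at a high level, without model code. As a minimal sketch of just the patching step, with shapes chosen purely for illustration, a clip can be cut into spacetime patches that a transformer could then treat as a token sequence:

```python
# Minimal sketch of "spacetime patches": cutting a video into tokens a
# transformer can process. Shapes here are illustrative, not Sora's.
import numpy as np

def to_spacetime_patches(video: np.ndarray, t: int, p: int) -> np.ndarray:
    """Split (frames, height, width, channels) into flat patch tokens.

    Each patch spans `t` consecutive frames and a `p` x `p` pixel region,
    so every token carries both spatial and temporal extent.
    """
    F, H, W, C = video.shape
    assert F % t == 0 and H % p == 0 and W % p == 0
    x = video.reshape(F // t, t, H // p, p, W // p, p, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)   # group the three patch axes together
    return x.reshape(-1, t * p * p * C)    # (num_patches, patch_dim)

clip = np.random.rand(16, 64, 64, 3)       # 16 frames of 64x64 RGB
tokens = to_spacetime_patches(clip, t=4, p=16)
print(tokens.shape)                        # (4 * 4 * 4, 4*16*16*3) = (64, 3072)
```

In a full system, each flattened patch would be projected into an embedding and fed to the transformer, while the diffusion process denoises the patches back into video.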
Sora's videos still contain plenty of mistakes, such as a walking person's left and right legs swapping places, a chair floating randomly in the air, and a bitten cookie that magically shows no bite mark. Still, Jim Fan, a senior research scientist at NVIDIA, praised Sora on the social media platform X as a "data-driven physics engine" that can simulate the world.
The fact that Sora's videos still display strange glitches when depicting complex scenes with lots of movement suggests that such deepfake videos will remain detectable for now, says Arvind Narayanan at Princeton University. But he also warned that in the long run "we will need to find other ways to adapt as a society."
OpenAI has been holding off on making Sora publicly available while it conducts “red team” exercises in which experts attempt to break safeguards in AI models to assess Sora's potential for abuse. An OpenAI spokesperson said the select group currently testing Sora are “experts in areas such as misinformation, hateful content, and bias.”
This testing is important because synthetic video could let malicious actors generate fake footage to, for example, harass someone or sway a political election. Misinformation and disinformation fueled by AI-generated deepfakes ranks as a major concern for leaders in academia, business, and government, as well as for AI experts.
“Sora is fully capable of creating videos that have the potential to deceive the public,” Tobac said. “Videos don't have to be perfect to be trustworthy, as many people still don't understand that videos can be manipulated as easily as photos.”
Tobac said AI companies will need to work with social media networks and governments to combat the scale of misinformation and disinformation that could arise once Sora is released to the public. Defenses could include implementing unique identifiers, or "watermarks," for AI-generated content.
When asked whether OpenAI plans to make Sora widely available in 2024, an OpenAI spokesperson said the company is "taking several important safety steps" before doing so. For example, the company already uses automated processes aimed at preventing its commercial AI models from producing depictions of extreme violence, sexual content, hateful imagery, and real politicians or celebrities. With more people than ever participating in elections this year, those safety measures are extremely important.
The Democratic chairman of the U.S. Senate Intelligence Committee says he is concerned about President Joe Biden’s campaign’s decision to join TikTok.
On Sunday, Biden’s re-election campaign used the Super Bowl to launch a new TikTok account to reach younger voters ahead of November’s presidential election.
The launch of the campaign on TikTok is notable given that the app, owned by Chinese tech company ByteDance, is under review in the United States due to potential national security concerns. Some U.S. lawmakers have called for the app to be banned over concerns that the Chinese government could access user data and influence what people see on the app.
On Monday, Democratic Sen. Mark Warner said he was concerned about the national security implications.
“I think we still need to find a way to follow India, which banned TikTok,” Warner said. “I’m a little worried about the mixed messages.”
Many Republicans have also criticized the campaign’s decision to join TikTok.
White House national security spokesperson John Kirby said nothing has changed regarding the “national security concerns” around the use of TikTok on government devices, adding that the policy barring it from them “continues today.”
Last year, the Biden administration ordered government agencies to remove TikTok from federally owned phones and devices.
TikTok insists it does not share U.S. user data with the Chinese government and has taken substantial steps to protect user privacy. The company did not respond to Reuters’ request for comment.
The Biden campaign said in a statement that it will “continue to meet voters where they are,” including on other social media apps such as Meta’s Instagram and Truth Social, founded by former President Donald Trump.
The campaign has “advanced security measures” in place for its devices and its presence on TikTok is separate from the app’s ongoing security review, campaign officials added.
In March 2023, the U.S. Treasury-led Committee on Foreign Investment in the United States (CFIUS) demanded that TikTok’s Chinese owners sell their shares or face a possible ban on the app, but the administration has taken no further action.
White House press secretary Karine Jean-Pierre said Monday that the CFIUS review is ongoing, and noted the White House’s previous support for a bill, filed by Warner and others, that would give the government new tools to combat threats posed by foreign-owned apps.
Last month, TikTok told Congress that 170 million Americans now use the short video platform, up from 150 million the year before.
The Guardian has confirmed that Meta is “revisiting” and considering expanding its hate speech policy regarding the term “Zionist.” According to an email seen by the Guardian, the company on Friday contacted and met with more than a dozen Arab, Muslim, and pro-Palestinian groups to discuss plans to review the policy and ensure that “Zionist” is not used as a substitute for “Jewish” or “Israeli.”
According to an email Meta representatives sent to the invited groups, the current policy allows the use of “Zionist” in political discussion as long as it does not refer to Jewish people in dehumanizing or violent ways; posts are removed when the term is used explicitly as a stand-in for Jews or Israelis. The company is considering the review in response to posts recently reported by users and “stakeholders,” as The Intercept first reported.
Another organization received an email from a Meta representative stating that the company’s current policy does not allow users to attack others based on protected characteristics, and that enforcing it requires an up-to-date understanding of the language people use to refer to one another. The email noted that while “Zionist” often refers to an ideology, which is not a protected characteristic, it can also be used to refer to Jews and Israelis. Organizations participating in the discussions expressed concern that the changes could lead to further censorship of pro-Palestinian voices.
Meta also gave examples of posts that would be removed, including one calling Zionists rats. The company has previously been criticized for unfairly censoring Palestinian-related content, raising concerns about how the revised policy would be enforced.
In response to a request for comment, Meta spokesperson Corey Chambliss shared an earlier statement referring to the “increasingly polarized public debate.” He added that Meta is considering whether and how to expand its nuanced handling of such language and will continue consulting stakeholders to improve the policy. The discussions are taking place during a high-stakes period of conflict, when accurate information and its dissemination can have far-reaching effects.
More than 25,000 Palestinians have been killed since the assault on Gaza began in October 2023. “Implementing a policy like this in the midst of a genocide is extremely problematic” and may cause harm to the community, said an official from the American-Arab Anti-Discrimination Committee.
Do you have smart devices or remotely controlled appliances in your home? Over the past decade these devices have become a common feature of modern homes, offering convenience but also raising privacy concerns. Smart devices collect, share, aggregate, and analyze data, posing potential risks to personal information. According to Katherine Kemp, an expert in law and data privacy, Australia’s privacy laws have not kept pace, a problem shared around the world. The information collected by smart devices can be used for targeted advertising, and it is often unclear where that data ends up.
While smart devices offer benefits such as environmental friendliness, Kemp believes that their main purpose is to collect and sell more information rather than promoting environmental sustainability. There’s a concern that companies use this data for targeted advertising and other commercial purposes, potentially creating detailed profiles of individuals.
Sam Floreani, policy director at Digital Rights Watch, has also raised concerns about privacy and consent models. How data is collected and used depends on the underlying incentives, and individuals need to fully understand the implications of sharing their data. Floreani also pointed to the need to improve consent laws and rights around personal data.
Australia’s current privacy laws require consent, but customers are not always given the right information to make informed choices. The government is planning an overhaul of the law to bring it into the “digital age” and strengthen enforcement powers for privacy watchdogs.
Convenience and privacy
Some argue that sacrificing privacy for convenience is worth it, especially if it improves accessibility. For the visually impaired community, smart devices play an important role in reducing social isolation. However, concerns remain about the trade-off between convenience and privacy.
“That’s too tempting.”
Early concepts of smart homes focused on collecting data solely for the occupants’ purposes. However, the potential for lucrative behavioral advertising services led to a shift in the use of this data. Changes in privacy laws are needed to establish stricter standards for how companies behave regarding smart devices.
Snapchat’s owner narrowly missed Wall Street’s expectations as it continues to grapple with a slowdown in digital advertising, sending the social media company’s stock down by nearly a third.
Snap said it was “encouraged by our progress,” but cited factors such as the Middle East conflict that had hurt its business.
Snap’s revenue rose 5% to $1.36 billion in the three months ended Dec. 31, missing analysts’ expectations for $1.38 billion. Net loss narrowed from $288 million to $248 million.
Investors remained concerned about the company’s growth. The company expects revenue for the current quarter to be between $1.1 billion and $1.14 billion. Analysts had expected about $1.1 billion.
Snap shares fell 30% to $12.21 in after-hours trading in New York.
Alphabet, owner of Google and YouTube, the world’s two biggest advertisers, and Meta Platforms, owner of Facebook and Instagram, are in a better position. Smaller companies in the market continue to struggle.
Santa Monica, Calif.-based Snap ended 2023 with about 414 million daily active users, a number it expects to rise to 420 million in the first quarter.
The group told investors on Tuesday that it was “shifting our focus to user growth and deepening our engagement in our most profitable regions, including North America and Europe.”
Evan Spiegel, CEO of Snap, said: “2023 was a pivotal year for Snap. We transformed our advertising business and continued to grow our global community, reaching 414 million daily active users,” adding that more than 7 million subscribers now pay for the company’s products.
“Snapchat strengthens our relationships with friends, family and the world, and this unique value proposition has provided a strong foundation on which to build our business for long-term growth.”
The company released its financial results a day after announcing it would lay off about 10% of its global workforce, roughly 530 people, as part of an organizational restructuring to “reduce hierarchy and increase in-person collaboration.” Last week, the company also recalled its Pixy selfie drone over a fire risk from battery overheating.
Mark Zuckerberg has been accused of taking an irresponsible approach to artificial intelligence after committing to develop AI systems as powerful as human intelligence, and raising the possibility of making them freely available to the public.
Meta’s CEO announced that the company intends to build an artificial general intelligence (AGI) system and plans to open source it, making it accessible to outside developers. He emphasized that the system should be “responsibly made as widely available as possible.”
In a Facebook post, Zuckerberg stated that the next generation of technology services requires the creation of complete general-purpose intelligence.
Although the term AGI is not strictly defined, it generally refers to a theoretical AI system capable of performing a range of tasks at a level of intelligence equal to or exceeding that of humans. The potential emergence of AGI has raised concerns among experts and politicians worldwide that such a system, or a combination of multiple AGI systems, could evade human control and pose a threat to humanity.
Zuckerberg expressed that Meta would consider open sourcing its AGI or making it freely available for developers and the public to use and adapt, similar to the company’s Llama 2 AI model.
Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the United Nations advisory body on AI, expressed concern about the potential for open source AGI, calling it “really, very scary” and labeling Zuckerberg’s approach as irresponsible.
“Thankfully, I think it will still be many years before those aspirations become a reality,” Hall said, stressing the need to establish a regulatory system for AGI to ensure public safety.
Last year, Meta participated in the Global AI Safety Summit in the UK and committed to help governments scrutinize artificial intelligence tools before and after their release.
Another UK-based expert emphasized that decisions about open sourcing AGI systems should not be made by technology companies alone but should involve international consensus.
In an interview with tech news website The Verge, Zuckerberg indicated that Meta would lean toward open sourcing AGI as long as it is safe and responsible.
Meta’s decision to open source Llama 2 last year drew criticism, with some experts likening it to “giving people a template to build a nuclear bomb.”
OpenAI, the developer of ChatGPT, defines AGI as “an AI system that is generally smarter than humans.” Meanwhile, Google DeepMind’s head, Demis Hassabis, suggested that AGI may be further out than some predict.
OpenAI CEO Sam Altman warned at the World Economic Forum in Davos, Switzerland, that further advances in AI will be impossible without energy supply breakthroughs, such as nuclear fusion.
Zuckerberg pointed out that Meta has built an “absolutely massive amount of infrastructure” to develop the new AI system, though he did not give a development timeline. He also mentioned that a successor to Llama 2 is in the works.
Apple expressed concerns about potential “irreparable harm” after the White House backed a ban on imports of certain watches due to a dispute over blood oxygen technology.
The tech giant has submitted an emergency motion to the court, seeking permission to continue selling two popular models, the Series 9 and Ultra 2, until the patent dispute with medical monitoring tech company Masimo is resolved.
Apple has requested the ban to be temporarily lifted until U.S. Customs determines whether a redesigned version of its watch infringes Masimo’s patents, with a decision expected on January 12th.
Masimo has accused Apple of stealing pulse oximetry technology for monitoring blood oxygen levels and incorporating it into their watch, as well as luring some of its employees to switch to Apple.
The US ITC has ordered a ban on the import and sale of models utilizing blood oxygen level reading technology.
Wedbush analyst Dan Ives estimated that halting watch sales before the holiday season could cost Apple $300-400 million, though the company is still expected to book nearly $120 billion in sales for the quarter covering the holiday period.
U.S. Trade Representative Katherine Tai upheld the ITC’s decision, but previously purchased Apple Watches with blood oxygen measurement capabilities are not affected by the ban.
Apple contests the ITC’s decision, claiming it rests on factual errors and that Masimo, which does not sell significant quantities of competing products in the U.S., would not be harmed by a pause on the order.
US lawmakers are pushing back against Microsoft’s efforts to build closer ties with China on AI development, after the company’s president, Brad Smith, moved to increase cooperation with the US adversary. During a meeting with Chinese Minister of Commerce Wang Wentao, Smith expressed the company’s eagerness to contribute to the digital transformation of China’s economy, with China looking forward to potential collaboration with Microsoft on AI development.
However, this has raised concerns among US lawmakers and commentators, who fear that Microsoft’s extensive presence in China could pose a national security risk. Senator Josh Hawley has pushed back against Microsoft’s partnership with China, emphasizing the Chinese government’s desire for AI supremacy and the potential risks associated with such collaboration.
Rep. Mike Gallagher also expressed similar concerns, calling for stronger export controls for AI and other critical technologies due to the Chinese government’s intentions for sinister use of advanced AI tools.
The US-China relationship has recently deteriorated, and concerns over national security risks associated with Microsoft’s collaboration have been heightened. Microsoft CEO Satya Nadella emphasized the company’s primary focus on global markets excluding China, distancing the company from doing business with the Chinese government. However, the company has stressed its commitment to responsibly and ethically developing AI technology in China.
Despite heightened scrutiny and concerns, Microsoft continues to expand its operations in China, facing criticism from US lawmakers over potential exploitation of its technology by the Chinese government. Other US tech companies, such as Google and Meta, have pulled back from the region due to disputes with the Chinese government and increased US scrutiny.
Microsoft’s cooperation with China has raised concerns of technology transfers and potential security risks, as China has gained access to sensitive information about AI products and has been accused of misusing advanced technologies for human rights abuses.
Overall, Microsoft’s presence in China and its efforts to collaborate in AI development have sparked concern among US lawmakers and commentators, who fear the potential national security risks associated with such partnerships.
Tesla is recalling 120,423 vehicles in the United States due to the risk of doors unlocking in the event of a crash, according to a report on Friday.
According to Reuters, the National Highway Traffic Safety Administration, the country’s traffic safety regulator, said on Friday that the recall covers 2021-2023 model year Model S and Model X vehicles, which do not meet federal safety standards.
Tesla has released an over-the-air software update to address this issue.
Tesla last week carried out the largest recall in the Elon Musk-led company’s 20-year history, covering more than 2 million vehicles, nearly every Tesla on U.S. roads.
Federal regulators say Tesla’s advanced driver-assistance system, Autopilot, has “inadequate” safeguards against misuse, and the company has launched a voluntary recall to add additional controls and warnings reminding drivers to stay attentive even when Autopilot is engaged.
According to the Washington Post, NHTSA wrote last week that Autopilot “may increase the risk of a collision” in situations where the driver does not maintain responsibility for operating the vehicle and is unprepared to intervene as necessary.
The recall applies to 2021-2023 Model S and Model X vehicles, which do not meet certain federal safety standards for side-impact protection.
Other major automakers also announced recalls this week.
Toyota Motor Corp. said Wednesday it is recalling 1 million vehicles over a defect that could prevent airbags from deploying if a sensor in the passenger seat shorts out. According to the Associated Press, the recall covers the Toyota Avalon, Camry, Highlander, RAV4, Sienna, and Corolla, some hybrid versions of those models, and some Lexus models such as the ES250 sedan and RX350 SUV.
Honda said Monday it was recalling more than 2.5 million vehicles over fuel pump problems that could cause engines to fail to start or to stall while driving, increasing the risk of crashes and injuries, NHTSA announced.
Elon Musk’s Tesla recalled more than 2 million vehicles last week over concerns about Autopilot.
General Motors is discontinuing sales of some 2024 Chevrolet Silverado and GMC Sierra trucks due to concerns about cracking metal in the passenger-side roof, according to a document released Wednesday by NHTSA. As a result, approximately 3,067 vehicles will be inspected.
Last month, Toyota recalled 1.9 million RAV4 SUVs due to battery deterioration that could cause a fire.
Members of the Congressional Black Caucus have sent a letter to Acting U.S. Labor Secretary Julie Su expressing concern about the disproportionate impact tech layoffs could have on Black workers, according to a letter obtained by TechCrunch.
First reported by TheGrio, the letter contains a list of questions about the steps the Department of Labor has taken to monitor the impact of tech layoffs on African Americans, about regulations governing business practices, and about whether recent Supreme Court precedent is being used to undermine companies’ DEI practices and budgets.
The technology industry has cut more than 240,000 jobs this year. The concern is that the “last in, first out” approach companies commonly take to layoffs falls hardest on newer hires and less senior employees deemed “non-essential,” who are the most likely to be from minority groups.
“Laying off the most recent hires directly and significantly impacts a group of people who have benefited from new diversity policies introduced in response to the heightened race-based conversations of 2020,” the letter said.
“While corporations reap billions in profits, we’ve seen Black, brown, and women tech workers bear the brunt of layoffs,” Missouri Congressman Emanuel Cleaver, co-chair of the CBC, told TechCrunch. “Congresswoman [Barbara] Lee and I, as co-chairs of CBC TECH2025, are calling on the government to take steps to address this harmful and troubling trend.”
The Department of Labor has not yet formally responded to the letter, which is dated December 15. A department representative said: “We can confirm that we have received the letter and are reviewing it.”
The technology and venture industries have been in a downturn in recent years. Many companies pledged to support the Black community in response to the 2020 killing of George Floyd, but as the market has slumped, those diversity pledges have gone underfunded, DEI jobs are being cut, and venture capital funding to Black founders continues to decline every quarter.
The CBC has been stepping up the pressure elsewhere, too. Last week it wrote to Sam Altman and the OpenAI board, calling on them to “swiftly diversify the board to include subject matter expertise with perspectives from the African American community.” OpenAI’s board currently includes no women or people of color.
Updated to add comment from DoL. The headline has been updated to reflect that they are representatives, not senators.
Google will drop some of the service bundling and contractual restrictions it applies to automakers in order to resolve a regulatory intervention in Germany, following a competition objection filed against it this summer over the bundling of Google Maps and other services in its Android-based in-vehicle infotainment software, known as Google Automotive Services (GAS).
Google’s proposed remedies will be put to automakers in a market test by Germany’s competition regulator, which will then determine whether they resolve the problems it has identified.
Back in June, Germany’s Federal Cartel Office (FCO) sent the tech giant a statement of objections over how it operates GAS, referring specifically to the bundle of Google Maps, Google Play, and Google Assistant that Google offers automakers.
The statement also highlighted Google’s practice of sharing a portion of its advertising revenue with automakers only if they refrain from pre-installing other voice assistants alongside Google’s own. The FCO further objected to Google requiring GAS licensees to set bundled services as defaults or display them prominently, and to Google restricting the interoperability of services included in GAS with third-party services.
At the time, the FCO said its preliminary view was that Google’s practices around GAS did not comply with Germany’s special competition rules for large digital companies, rules that give the regulator greater freedom to intervene where it suspects competition is being undermined.
“In particular, we are critical of Google offering its services for infotainment systems only as a bundle. This reduces the opportunities for competitors to sell competing services as individual services,” the FCO said in the summer.
Regulators said they will now carefully consider Google’s proposal to determine whether an appropriate level of separation of its services from in-vehicle infotainment platforms would address competition concerns.
“Bundling services with significant market power and reach together with services of lesser power is a particularly problematic way of ‘infiltrating’ a market, as it may reduce competitors’ opportunities to sell competing services,” FCO president Andreas Mundt said in a press release on Wednesday announcing Google’s proposal. “We will now look very closely at whether Google’s proposals can effectively eliminate the practices that raised our concerns.”
Google’s proposed remedy to address the FCO’s competition concerns is to offer three products separately, in addition to the GAS bundle: a Google Maps OEM software development kit, the Google Play Store, and a Cloud Custom Assistant. This would allow automakers, for example, to develop mapping and navigation services with functionality comparable to Google Maps.
Offering the Google Play Store separately would also let end users download a wider selection of third-party apps, alleviating concerns that they are steered toward Google’s own apps. The Cloud Custom Assistant is described as a solution enabling automakers to offer their own competitive AI voice assistants in vehicles.
The tech giant is also proposing to remove contractual clauses that tie advertising revenue sharing to the exclusive pre-installation of its Google Assistant voice AI on GAS infotainment platforms.
“Google is also prepared to remove contractual provisions relating to setting Google services as a default application or displaying them prominently on infotainment platforms,” the FCO said. “Finally, Google stands ready to enable licensees to combine the Google Assistant service with other mapping and navigation services and provide the technical prerequisites to create the necessary interoperability.”
“Based on the results of the market test, the Bundeskartellamt [FCO] will determine whether Google’s proposals adequately address the concerns raised to date. Decisive in this context will be whether they effectively end the bundled offering of Google’s services in the automotive sector.”
Google was asked for comment on the proposal.
The technology giant’s business was placed under Germany’s special competition abuse regime in January 2022. Since then, the FCO has extracted a number of concessions over how it operates, including an agreement on Google’s data terms this autumn that gives users more choice over how their information is used. Last year, Google also proposed limiting how news content licensed from third-party publishers appears in search results, to address regulators’ concerns about self-preferencing.
Germany’s rebooted digital competition regime applies only to designated tech giants operating in its market, but companies may choose to apply product changes globally to manage operational complexity. (For example, after the FCO intervened, Meta launched a new accounts center this summer that lets users opt out of cross-site tracking, and announced plans to roll it out globally.)
The European Union has also recently adopted its own ex-ante competition reform in the form of the Digital Markets Act (DMA), targeting so-called internet gatekeepers. The FCO’s enforcement against Big Tech therefore offers a glimpse of the action that may follow across the bloc next year, when compliance deadlines kick in for the six designated DMA gatekeepers and their 22 core platform services. That list includes Google Maps, Google Play, Google Shopping, Google Ads, Google Chrome, Google Android, Google Search, and YouTube, the Google-owned video-sharing platform.
Notably, the EU has not designated GAS as a core platform service. This may partly explain the FCO’s focus on GAS here, as competition regulators across the region seek to avoid duplicating interventions. (Germany’s status as a major car-making nation may also make scrutiny of Google’s automotive software and services a natural fit.)
The FCO also opened its proceeding over Google Maps in June 2022, some time before the DMA was approved by the bloc’s co-legislators.
The pan-EU regulation, by contrast, began to apply in May 2023, but the deadline for DMA gatekeepers to comply is March 2, 2024, so the full reboot of competition rules for Big Tech across the EU will not land until next year. That may be reason enough for the FCO to keep monitoring Google Maps for some time. (In this regard, the German regulator has also said it will continue to “cooperate closely” with EU competition authorities on regulating the digital economy.)
As of June 2023, the FCO said it was continuing to investigate Google’s terms of use for the Google Maps Platform (GMP); in a preliminary assessment, it indicated the tech giant may need to end restrictions on combining its GMP mapping services with third-party map services.
“These restrictions could hinder competition between applications relating to mapping services used by, for example, logistics, transport and delivery service providers,” the FCO said at the time. “It could also negatively impact competition among services for vehicle infotainment systems by making it more difficult for map service providers to develop effective alternatives to Google Maps.”
Ex-ante competition reforms in Germany and across the EU aim to curb abusive practices by digital giants that could further entrench their vast market power, and European regulators hope these more proactive interventions will prove better at correcting imbalances in the digital economy than classic after-the-fact enforcement has managed. (A related example of classic enforcement is the $123 million fine Italy’s competition watchdog imposed on Google in May 2021 over restrictions it applied to third-party app makers via its Android Auto in-car software.)
India’s parliament has passed a telecommunications bill replacing rules more than a century old, as the country, with more than 1.17 billion phone connections and 881 million internet subscribers, looks to modernize connectivity and introduce new services such as satellite broadband just months before general elections.
India’s upper house of parliament approved the Telecommunications Bill 2023 by voice vote on Thursday, with many opposition leaders absent due to suspensions, a day after the lower house passed it. The bill repeals rules dating back to the telegraph era of 1885 and gives Prime Minister Narendra Modi’s government the authority to manage telecommunications services and networks in the interest of national security, to monitor data, and to intercept communications.
The newly passed bill also allows spectrum for satellite-based services to be allocated without an auction, a move favoring companies such as OneWeb, Starlink, and Amazon’s Kuiper, which want to launch satellite broadband in the world’s most populous country and have long demanded an “administrative process” for spectrum allocation instead of auctions. India’s Jio, which is trying to compete with the three global players through its homegrown satellite broadband service, had previously opposed the administrative allocation model.
The bill also requires biometric authentication of subscribers to limit fraud and caps the number of SIM cards each subscriber may hold. It additionally provides for civil monetary penalties of up to $12,000 for violations of certain provisions and up to $600,400 for breaching conditions established under the law.
The bill further amends the Telecom Regulatory Authority of India Act, 1997, as the government seeks to attract foreign investors by increasing private participation. The amendments allow private-sector executives with more than 30 years of professional experience to be appointed as the regulator’s chairperson, and those with more than 25 years of experience to be appointed as members. Previously, only retired civil servants could serve as the regulator’s chairperson and commissioners.
“This is a very comprehensive and very large-scale structural reform born out of the vision of Prime Minister Shri Narendra Modi Ji. Through this bill, arrangements will be made to leave behind the telecom sector’s legacy problems and make it a rising sector,” said Ashwini Vaishnaw, India’s telecom minister, while introducing the bill in parliament.
Notably, the Telecommunications Bill drops the term “OTT,” which appeared in last year’s first draft and set out regulations for over-the-top messaging apps such as WhatsApp, Signal, and Telegram. Industry groups such as the Internet and Mobile Association of India, whose members include Google and Meta, have praised the change. However, the scope of the regulation is not clearly defined throughout the document, and Shivnath Thukral, Meta’s head of public policy in India, warned in an internal email that the government may retain the power to classify OTT apps as telecommunications services in the future and subject them to licensing regimes, according to a report by Indian outlet Moneycontrol.
Digital rights activists and privacy advocates have also raised concerns about the ambiguity surrounding the regulations and the lack of public consultation on the final version of the bill.
Apar Gupta, founding director of the digital rights group Internet Freedom Foundation, said at a public event earlier this week that the bill lacks safeguards for those targeted by surveillance.
“The Department of Telecommunications still refuses to create a central repository on internet shutdowns, thereby reducing transparency and completely ignoring the core safeguards that telecommunications rules require,” he emphasized.
Digital rights group Access Now called for the bill to be withdrawn and a new draft to be drafted through consultation.
“The bill is regressive because it entrenches colonial-era powers of the government to intercept communications and shut down the internet, and it undermines end-to-end encryption, which is critical to privacy,” said Namrata Maheshwari, Asia-Pacific policy advisor at Access Now, in a prepared statement.
The bill is currently awaiting approval from the President of India to become an official law.
Tesla has recalled nearly all vehicles sold in the United States to fix a flaw in the Autopilot driver-assistance system at Elon Musk’s electric car company. The move comes after Virginia authorities found the software had been activated during a fatal crash there in July.
The recall of more than 2 million vehicles, reportedly the largest in Tesla history, was revealed as part of an ongoing investigation by the National Highway Traffic Safety Administration.
The investigation, which began more than two years ago and covers 956 crashes in which Autopilot was implicated, determined that the system’s existing safety measures “may not be sufficient to prevent driver misuse” of the software.
“In certain circumstances when Autosteer is engaged, if a driver does not maintain responsibility for vehicle operation and is unprepared to intervene as necessary, or fails to recognize when Autosteer is canceled or not engaged, there may be an increased risk of a crash,” NHTSA said in a release.
The electric car maker said the recall will consist of an over-the-air software update that was expected to begin rolling out on Tuesday or shortly after. The update applies to Tesla Model 3, Model S, Model X, and Model Y vehicles manufactured in certain years, including some dating back to 2012.
NHTSA is still investigating the crash that led to the death of Pablo Teodoro III.
Vehicles will receive “additional controls and warnings” reminding drivers to take precautions when using Autopilot, such as keeping both hands on the steering wheel and their eyes on the road.
Tesla shares fell more than 1.5% in Wednesday trading before closing up 1%.
Pablo Teodoro III had activated Autopilot before the fatal crash, officials said.
A spokeswoman for the Fauquier County Sheriff’s Office said Teodoro appeared to have taken action a second before the accident, but it was unclear what he did.
The investigation also found that the car’s systems “recognized something on the road and sent a message.”
NHTSA is still investigating the crash.
The recall also follows a damning Washington Post report claiming Tesla allowed Autopilot to be used in areas the software was not designed to handle.
Tesla is facing intense scrutiny over its Autopilot software.
The outlet said it had identified at least eight fatal or serious crashes involving Tesla Autopilot on roads where the “driver-assistance software could not reliably operate,” such as those with hills or sharp curves.
In response to the article, Tesla defended the safety of Autopilot in a lengthy statement, arguing that it has a “moral obligation” to keep improving what it called its already best-in-class safety systems.
Elon Musk claims Autopilot is safe.
“The data is clear: the more automation technology provided to support drivers, the safer they and other road users will be,” the company said.
Tesla CEO Elon Musk has reiterated that Autopilot is safe to use and emphasized the company’s commitment to developing driver-assistance and fully self-driving features as an important part of its long-term plans.
One of Google’s most vocal critics says the company’s “catastrophic” antitrust loss this week to “Fortnite” maker Epic Games could completely change the situation for Big Tech, potentially exposing Google and other companies to a wave of breakups.

Matt Stoller, director of research at the antitrust watchdog American Economic Liberties Project, said the jury’s unanimous verdict that Google maintained an illegal monopoly through its Android app store marked the first time a truly powerful Big Tech company has lost a major antitrust case.

“There will be appeals and things like that, but I think over the next five years or so Google will start to settle and agree to splits, because they know they’re going to lose and the legal uncertainty isn’t worth it,” Stoller told journalist Glenn Greenwald on his show “System Update.”

“I know there’s a lot of cynicism, but this is actually how we’re going to rebuild these companies,” Stoller added. “It’s kind of amazing that it actually works.”

Stoller said the jury’s decision sets an important new legal precedent that is likely to influence a range of antitrust cases facing Google and other large companies. Google is awaiting a judge’s ruling in a landmark Justice Department case targeting its online search empire, as well as separate cases over its digital advertising business and Google Maps.

“All of a sudden, there’s a precedent, and these judges are going to have to find reasons to rule in favor of Google, whereas before they had to find reasons to rule against Google,” Stoller said. “I think it’s going to be much harder for Google to win these lawsuits.”

As The Post reported, experts say the Google v. Epic ruling could upend the business model underpinning the company’s lucrative Play Store, which has charged large companies fees of up to 30% on in-app purchases and required them to use the company’s billing system.

U.S. District Judge James Donato will next decide which illegal business practices Google must eliminate. He could order Google to stop paying major app developers to discourage them from launching competing app stores and to suspend its billing requirements, among other remedies. In May 2024, Judge Amit Mehta will decide Google’s fate in the Justice Department lawsuit alleging it has maintained an illegal monopoly over online search.

The Post reached out to Google for comment on Stoller’s remarks. Meanwhile, Google has already announced plans to contest the verdict in the Epic case. “Android and Google Play offer more choice and openness than any other major mobile platform,” said Wilson White, the company’s vice president of government affairs and public policy.
“This trial made clear that we compete fiercely with Apple and its App Store, as well as app stores on Android devices and game consoles.”
Tesla is recalling more than 2 million vehicles in the United States over concerns about its advanced driver assistance system, Autopilot.
The National Highway Traffic Safety Administration (NHTSA) said the system’s methods of determining whether drivers are paying attention may be inadequate and could lead to “foreseeable abuse of the system.”
NHTSA has spent more than two years investigating a series of crashes, some fatal, that occurred while Elon Musk’s company’s Autopilot system was in use.
Tesla said Autopilot’s software controls “may not be sufficient to prevent driver misuse” and could increase the risk of a crash.
Tesla’s Autopilot is designed to let the car steer, accelerate, and brake automatically within its lane, while Enhanced Autopilot can assist with lane changes on the highway; neither makes the vehicle self-driving.
One of the Autopilot components is Autosteer, which maintains a set speed or following distance and works to keep the vehicle within its lane of travel.
Tesla said it disagrees with NHTSA’s analysis but will deploy an over-the-air software update that adds controls and alerts “to those already existing on affected vehicles to further encourage the driver to adhere to their continuous driving responsibility whenever Autosteer is engaged.”
Tesla said the update includes more prominent visual alerts in the user interface, simpler engagement and disengagement of Autosteer, and additional checks when Autosteer is engaged.
Tesla added that the update will eventually suspend a driver’s use of Autosteer if they “repeatedly fail to demonstrate continuous and sustained driving responsibility while the feature is engaged.”
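Tesla has not published how that suspension logic works. Purely as an illustration of the “repeated failure” pattern the update describes, with an invented threshold and names, a strike-based lockout might look like this:

```python
# Illustrative strike-based feature lockout, NOT Tesla's implementation.
# The threshold and class/method names are invented for the example.
MAX_STRIKES = 5  # hypothetical: inattentive disengagements before suspension

class AutosteerPolicy:
    def __init__(self) -> None:
        self.strikes = 0
        self.suspended = False

    def record_disengagement(self, driver_attentive: bool) -> None:
        """Count a forced disengagement against the driver if inattentive."""
        if not driver_attentive:
            self.strikes += 1
        if self.strikes >= MAX_STRIKES:
            self.suspended = True   # feature locked out until reset

    def may_engage(self) -> bool:
        return not self.suspended

policy = AutosteerPolicy()
for _ in range(5):
    policy.record_disengagement(driver_attentive=False)
print(policy.may_engage())  # False: repeated misuse suspends Autosteer
```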
The recall applies to models Y, S, 3, and X produced between October 5, 2012 and December 7 of this year.
The update was expected to reach some affected vehicles on Tuesday, with the rest to follow later.
NHTSA will continue its investigation into Autopilot “to monitor the effectiveness of Tesla’s remedies,” the agency said.
Since 2016, regulators have investigated 35 Tesla crashes in which the vehicles were suspected of running on automated systems. At least 17 people were killed in those crashes.
It is unclear whether this recall affects Tesla vehicles in other countries, including the UK.
This is the second time this year that Tesla has recalled vehicles in the United States.
A former program manager for Blue Origin’s BE-4 rocket engine has filed a lawsuit against the company, alleging whistleblowing retaliation after speaking out about safety issues.
The complaint was filed Monday in Los Angeles County Superior Court. It includes a detailed story about program manager Craig Stoker’s seven-month effort to raise concerns about Blue Origin’s safety and harsh working conditions.
Stoker reportedly told two vice presidents in May 2022 that then-CEO Bob Smith’s behavior was causing employees to “frequently violate safety procedures and processes to meet unreasonable deadlines.” The suit says Smith “exploded” when problems arose, creating a hostile work environment. Stoker sent a follow-up email containing a formal complaint against Smith to the two vice presidents: Linda Koba, vice president of engine operations, and Mary Plunkett, senior vice president of human resources.
“Myself, my management team, and others within the company should not have to constantly apologize or make excuses to ourselves or our teams for the CEO’s bad behavior,” the email said. “We spend a significant amount of time trying to keep things running smoothly, boosting morale, repairing damage, and stopping people from overreacting . . . The hostile work environment . . . wears on our employees, creating a safety and quality risk to our products and customers.”
TechCrunch has reached out to Blue Origin for comment and will update this article if we hear back.
When Mr Stoker asked about a separate investigation into Mr Smith’s actions, Mr Plunkett said the investigation had concluded and Mr Smith was being “coached”.
Just months after filing the formal complaint, Stoker learned that a fellow employee had nearly suffocated while working under an engine nozzle, and he raised his concerns with Michael Stevens, vice president of safety and mission assurance. The complaint says Stoker was “ignored.” In August, Stoker sent another email to executives expressing concern that nine people on the engine team were working shifts of “over 24 hours” to deliver engines on time to customer United Launch Alliance.
There is no doubt that the company was under pressure to deliver. Blue Origin’s BE-4 will power United Launch Alliance’s Vulcan rocket, which is expected to make its much-delayed debut around Christmas. According to the complaint, Blue Origin’s contract with ULA requires the company to provide one year’s notice of any issues that could affect the delivery of its rocket engines. Stoker wanted to tell ULA that the engine might be delayed.
However, Smith allegedly instructed Stoker not to share these production or delivery issues with ULA.
Ultimately, after an internal investigation, Blue Origin HR concluded that Mr. Smith did not create a hostile work environment or violate company policy. Stoker disagreed with this conclusion. Stoker later learned that officials from the engine program had not been interviewed as part of the investigation, according to the complaint.
The complaint alleges that the human resources department was reluctant to investigate because the complainant, Stoker, was a man: “Being a man, Human Resources expected him to deal with problems on his own and not do too much ‘whining,’ and Mr. Stoker was given no means or resources” to raise his concerns about the company’s most powerful executive.
Stoker was fired on October 7, seven months after he first raised safety concerns. The complaint makes clear who it says was behind the decision: “Smith spearheaded this termination due to Mr. Stoker’s complaints against him, his raising of safety/ethics/legal issues, and the fact that many of these reports threatened to disrupt his production/delivery schedule.”
Blue Origin announced in September that Bob Smith would step down as CEO after nearly six years. His tenure saw the company grow from fewer than 1,000 employees to more than 12,000 and sign numerous high-profile, high-value contracts with NASA. But it was not without serious controversy, including allegations of a culture of sexism among senior executives.