Tesla Accused by Shareholders of Overstating Robotaxi Potential

Tesla investors have filed a lawsuit against Elon Musk and the company for allegedly hiding significant risks associated with the firm’s self-driving vehicles.

The class action lawsuit, which accuses Musk and Tesla of securities fraud on behalf of shareholders, was filed on Monday evening. Tesla launched the first public trial of its self-driving taxis in late June near its Austin, Texas, headquarters. Observers of the test reported instances of vehicles accelerating unexpectedly, braking abruptly, mounting the curb, driving against traffic, and dropping off passengers in the middle of a busy road. The National Highway Traffic Safety Administration (NHTSA), the chief U.S. road safety regulator, is investigating the Robotaxi pilot.

Investors claimed that Musk and Tesla systematically overstated the effectiveness and potential of autonomous driving technology, which artificially inflated Tesla’s financial forecasts and stock prices. Following the commencement of testing, Tesla’s stock plummeted by 6.1%, erasing about $68 billion in market capitalization.

Shareholders pointed to Musk’s assurances during an April 22 earnings call, where he said Tesla was “laser-focused” on launching Robotaxi in Austin in June and claimed that its approach to autonomous driving would enable a “scalable and safe deployment across varied terrains and scenarios.”

Tesla had not responded to requests for comment as of Tuesday. The company’s CFO, Vaibhav Taneja, and his predecessor, Zachary Kirkhorn, are also named in the lawsuit.

The growth of Robotaxi is critical for Tesla as it contends with softening demand for its aging electric vehicle lineup and a backlash against Musk’s political views.

Musk, the world’s richest individual, claims the service will reach half the U.S. population by year’s end, but he first needs to persuade regulators and the public that his technology is safe. He says Robotaxi services have expanded into the San Francisco Bay Area, where Tesla was previously headquartered; however, state regulations bar Tesla from offering paid autonomous rides there without a new permit from the Department of Motor Vehicles.

On August 1, a Florida jury found Tesla 33% liable for a 2019 crash involving its driver-assistance software that killed a 22-year-old woman and injured her boyfriend, with damages amounting to roughly $243 million. Tesla disputes the finding of liability and plans to appeal the verdict.


Source: www.theguardian.com

Judge Rules Men Accused of Hacking Can Be Sent to U.S. for Trial

A British court has approved the extradition of an Israeli man charged by New York prosecutors over an alleged hack-for-hire operation aimed at environmental organizations.

According to prosecutors, the company operated by 57-year-old Amit Forlit allegedly earned over $16 million by hacking more than 100 victims and stealing confidential data, working for a lobbying firm on behalf of a major oil company.

In a court submission from January, Forlit’s attorneys identified the oil company as ExxonMobil. Exxon is facing lawsuits from Democratic attorneys general and local officials over its role in climate change, with claims that it concealed its knowledge of climate change for decades to protect its oil sales. The lobbying firm named in the filing is DCI Group.

Exxon has stated that it was not involved in and had no knowledge of the hacking activities, emphasizing, “If hacking is involved, we will condemn it in the strongest possible terms.”

A spokesman for DCI, Craig Stevens, stated that the firm has instructed its employees and consultants to follow the law and asserted that none of DCI’s guidance was linked to the hack that allegedly occurred a decade ago.

DCI also pointed to “numerous billionaire donors still benefiting from the fossil fuel legacy,” describing them as the financiers of “radical anti-oil activists.”

This remark hinted at the Rockefellers’ role in funding organizations pursuing climate change litigation. The Rockefeller heirs, whose family amassed an oil fortune over a century ago, lead the Rockefeller Family Fund, which plays a significant role in the movement to sue oil companies over climate change. Its director, Lee Wasserman, has said he was targeted in the hacking campaign.

Last year, Forlit was arrested in connection with the New York case, charged with wire fraud, conspiracy to commit wire fraud, and hacking offenses that could carry lengthy prison sentences. His legal team contended that he should not be extradited because he could not receive a fair trial in the U.S., given the political climate surrounding climate change litigation.

They argued that “one motive for the prosecution appears to be an effort to advance political agendas against ExxonMobil, with Forlit being collateral damage.”

Forlit’s attorneys also expressed concerns about his safety at the Metropolitan Detention Center, New York City’s only federal jail, which has been criticized for violence and dysfunction. High-profile detainees have included Luigi Mangione, Sam Bankman-Fried, and Sean Combs.

Westminster Magistrates’ Court dismissed these concerns, but Forlit has the option to appeal. His attorney did not immediately respond to requests for comment.

One targeted entity was the Union of Concerned Scientists, which has extensively researched the fossil fuel industry’s role in climate science disinformation. The group also works on source attribution science, estimating how much specific companies contribute to global warming effects such as rising sea levels and wildfires. Its findings support lawsuits against the oil sector.

The organization became aware of the hacking attempts following a 2020 report from Citizen Lab, a cybersecurity watchdog at the University of Toronto, which revealed that hackers were targeting American nonprofits working on the #ExxonKnew campaign.

The Union of Concerned Scientists received suspicious emails in which hackers attempted to extract passwords or deploy malicious software. Prosecutors from the U.S. attorney’s office in the Southern District of New York subsequently opened an investigation.

One of Forlit’s associates, Aviram Azari, pleaded guilty in New York to charges including computer breaches, wire fraud, and identity theft, and received a prison sentence of more than six years.

Forlit manages two Israel-registered security and intelligence firms, one of which is also registered in the U.S. His clientele included a lobbying firm representing “one of the world’s largest oil and gas companies,” a company involved in ongoing climate change litigation; ExxonMobil was long headquartered in Irving, Texas.

The lobbying firm selected targets and passed them to Forlit, who in turn passed the list to Azari. Azari, who ran another Israel-based company, hired hackers in India to gain illegal access to accounts. Stolen documents were then allegedly fed to the oil company and the media to undermine the integrity of civil investigations, according to the filings.

Source: www.nytimes.com

EU accused of creating “devastating” copyright loopholes in AI laws

The architects of the EU’s copyright law say new rules are needed to safeguard writers, musicians, and other creatives left vulnerable by an “irresponsible” legal gap in the EU’s artificial intelligence legislation.

The intervention came as 15 cultural organizations wrote to the European Commission warning that a draft rule under the AI Act would compromise copyright protections and open a concerning legal loophole ripe for exploitation.

Axel Voss, a member of the European Parliament, emphasized that the 2019 copyright directive was not designed to address generative AI models, and warned of the law’s unintended consequences.

The arrival of ChatGPT, an AI chatbot capable of generating essays, jokes, and other content, has underlined the urgent need for copyright protections given the rapid advance of AI technology and its impact on creative works.

Negotiations over the EU’s AI legislation have highlighted the difficulty of securing strong copyright safeguards for creative content, with critics arguing that the resulting legal gap favors Big Tech over European creatives.

The debate over AI and copyright law has intensified as generative AI models such as ChatGPT and DALL-E become more widely used, prompting legal disputes over copyright infringement and the ethics of using AI to produce creative content.

The lack of enforceable rights for authors and creators under the AI law has alarmed cultural organizations and industry stakeholders, prompting calls for greater transparency and accountability in the use of AI technologies.

As the European Commission weighs the future of AI regulation and its implications for copyright protection, campaigners say robust measures to safeguard creatives’ rights and the integrity of their work must remain a top priority.

Source: www.theguardian.com

Elon Musk accused of spreading misinformation after sharing edited Kamala Harris video

Kamala Harris’ campaign has accused Tesla CEO Elon Musk of spreading “manipulated lies” after he shared a fake video of the vice president on his X account.

Musk reposted a video on Friday evening that had been doctored to show Harris saying, “I was selected because I’m the ultimate diversity hire,” along with other controversial statements. The video has garnered 128 million views on Musk’s account. He captioned it with “This is awesome” and a laughing emoji. Musk owns X, which he rebranded from Twitter last year.

Democratic Senator Amy Klobuchar criticized Musk for violating platform guidelines on sharing manipulated media. Users are not allowed to share media that may mislead or harm others, although satire is permitted as long as it doesn’t create confusion about its authenticity.

Harris’ campaign responded by stating, “The American people want the real freedom, opportunity, and security that Vice President Harris is providing, not the false, manipulated lies of Elon Musk and Donald Trump.”

The original video was posted by the @MrReaganUSA account, associated with conservative YouTuber Chris Kohls, who described it as a parody.

However, Musk, a supporter of Donald Trump, did not clarify that the video was satire.

California Governor Gavin Newsom stated that the manipulated video of Harris should be illegal and indicated plans to sign a bill banning such deceptive media, likely referring to a proposed ban on election deepfakes in California.

Musk defended his actions, stating that parody is legal in the USA, and shared the original @MrReaganUSA video.


An expert on deepfakes commented on the video, highlighting the use of generative AI technology to create convincing fake audio and visuals.

Source: www.theguardian.com

Apple accused by UK charity of underreporting child sexual abuse images

Child safety experts say Apple lacks effective monitoring and scanning for child sexual abuse material on its platforms, raising concerns about its ability to address a growing volume of such content linked to artificial intelligence.

The National Society for the Prevention of Cruelty to Children (NSPCC) in the UK has criticized Apple for undercounting the prevalence of child sexual abuse material (CSAM) on its products. Data obtained by the NSPCC from police shows that perpetrators in England and Wales used Apple’s iCloud, iMessage, and FaceTime to store and share CSAM in more recorded cases than Apple reported globally across all countries.

Based on information collected through a freedom of information request and shared exclusively with the Guardian, the NSPCC found that Apple was implicated in 337 recorded offenses of child abuse imagery in England and Wales between April 2022 and March 2023. In 2023, Apple reported only 267 suspected instances of child abuse imagery worldwide to the National Center for Missing & Exploited Children (NCMEC), far fewer than other leading tech companies: Google reported over 1.47 million and Meta more than 30.6 million, according to NCMEC’s annual report.

All US-based technology companies are required to report detected CSAM on their platforms to NCMEC. Apple’s iMessage service is end-to-end encrypted, preventing Apple from viewing users’ messages, as is Meta’s WhatsApp, which nevertheless reported about 1.4 million suspected CSAM cases to NCMEC in 2023.

Richard Collard, head of child safety online policy at NSPCC, expressed concern over Apple’s discrepancy in handling child abuse images and urged the company to prioritize safety and comply with online safety legislation in the UK.

Apple declined to comment but pointed to a statement from last August, when it explained its decision against implementing a program to scan iCloud photos for CSAM, citing user privacy and security as top priorities.

In late 2022, Apple abandoned plans for an iCloud photo-scanning tool known as NeuralHash, which would have compared uploaded images against a database of known child abuse images. The scanning plan had faced opposition from digital rights groups, while its abandonment drew criticism from child safety advocates.

Experts are also worried about Apple Intelligence, the AI system the company announced in June, as AI-generated child abuse content poses new risks to children and hampers law enforcement’s ability to protect them. Child safety advocates point to a rise in reports of AI-generated CSAM and the harm such images cause survivors and victims of child abuse.

Sarah Gardner, CEO of the Heat Initiative, criticized Apple’s efforts to detect CSAM as insufficient and urged the company to strengthen its safety measures.

Source: www.theguardian.com

Mike Lynch portrayed as “controlling” boss as Autonomy fraud trial continues into second day

On the first day of his criminal trial in San Francisco, prosecutors portrayed British entrepreneur Mike Lynch as a controlling boss who orchestrated a massive fraud. The trial resumes on Tuesday.

Lynch, the co-founder of Autonomy, is accused of inflating the software company’s sales, misleading auditors, analysts, and regulators, and threatening those who raised concerns before its acquisition by Hewlett-Packard (HP) in 2011.

Lynch’s lawyers plan to have him testify once prosecutors complete their case against him. He has denied all allegations of wrongdoing and faces up to 25 years in prison if convicted.

HP’s $11.1 billion acquisition of Autonomy soured when HP wrote down the business’s value by $8.8 billion, citing alleged accounting irregularities, omissions, and misstatements.

As the trial commenced, prosecutors called on Ganesh Vaidyanathan, Autonomy’s former head of accounting, as the first witness to testify about accounting issues raised in 2010.

Assistant U.S. Attorney Adam Reeves argued that Lynch presented Autonomy as a successful company to HP but that its financial statements were false and misleading due to accounting tricks and concealing hardware sales.

Stephen Chamberlain, Autonomy’s former vice-president of finance, also pleaded not guilty to charges of falsifying documents and misleading auditors; his attorney suggested he was a pawn caught in a battle between giants.

Lynch, who spent his time under house arrest preparing for trial, argues that Autonomy’s poor performance after the acquisition was due to mismanagement by HP, not wrongdoing before the deal.

Extradited from Britain to the U.S. last year, Lynch posted bail and wears a GPS ankle tag while under 24-hour guard.

Source: www.theguardian.com

Dating Apps Accused of Promoting Addiction in Lawsuit Against Tinder, Hinge, and Match

Many of us have had the negative experience of being swiped left on, ghosted, breadcrumbed, or benched on dating apps. On Valentine’s Day, six dating app users filed a proposed class action lawsuit alleging that Tinder, Hinge, and other Match dating apps use addictive game-like features to encourage compulsive use. The lawsuit claims that Match’s apps employ “dopamine-manipulating product features” that turn users into “gamblers locked in a search for psychological rewards,” resulting in expensive subscriptions and persistent usage.

The lawsuit was met with skepticism by some, but online dating experts say it reflects a wider criticism of the way apps gamify human experiences for profit. The addiction may have been built into dating apps from the beginning, with the swipe mechanism, invented by Tinder co-founder Jonathan Badeen, being compared to an experiment with pigeons that aimed to manipulate the brain’s reward system.

The game-like elements of dating apps are exemplified by the card-style swipe interface Tinder pioneered, leading some experts to argue that the apps encourage negative behaviors and leave people feeling manipulated. One study suggested that couples who met online are slightly more likely to report lower marital satisfaction and stability. Dating apps also appear to encourage “bad behavior such as ghosting, breadcrumbing, and backburner relationships,” according to some researchers.

However, dating apps have also been criticized for perpetuating idealized preferences for particular ethnicities, age groups, and body types, ultimately reproducing privilege. While dating apps widen the range of potential partners in theory, endless access to romantic possibilities has been shown to have negative effects on mental health, leading some experts to advocate for transparency around matching algorithms and education about the pitfalls of online dating.

Despite the criticisms, a Match Group spokesperson dismissed the lawsuit, stating that the company’s business model is not based on advertising or engagement metrics and that it actively works to prevent addictive use of its apps. Critics counter that the plaintiffs are pointing to a systemic problem in the dating app ecosystem.

Source: www.theguardian.com

Meta Accused of Inadequate Child Protection Measures by Whistleblower

According to a whistleblower, Mark Zuckerberg’s Meta has not done enough to protect children in the wake of Molly Russell’s death. The whistleblower claims the social media company still poses a risk to teenagers and that Zuckerberg has failed to put in place the infrastructure needed to protect them from harmful content.

Arturo Bejar, a former senior engineer at Meta, the owner of Instagram and Facebook, voiced his concern that the company had not learned from Molly’s death and could have provided a safer experience for young users. Bejar’s survey of Instagram users found that 8.4% of 13- to 15-year-olds had seen someone harm themselves or threaten to do so within the past week.

Bejar stressed that if the company had taken the right steps after Molly Russell’s death, far fewer people would now encounter self-harm content. Russell, a British teenager who took her own life after viewing content related to suicide, self-harm, depression, and anxiety on Instagram and Pinterest, prompted the concerns Bejar raised. He believes the company could have made Instagram safer for teens but chose not to make the necessary changes.

Former Meta employees have also called on the company to set goals for reducing harmful content and to create sustainable incentives for working on these issues. Meanwhile, Bejar has met with British politicians, regulators, and activists, including Molly’s father, Ian Russell.

Bejar has suggested a series of changes for Meta, including making it easier for users to flag unwanted content, surveying users’ experiences regularly, and facilitating the reporting of negative experiences with Meta’s services.

For those in need of support, various crisis support services and helplines are available in different regions. The Samaritans, National Suicide Prevention Lifeline, and other international helplines are accessible for anyone in need of assistance.

Source: www.theguardian.com