Facebook asks U.S. Supreme Court to dismiss fraud lawsuit over Cambridge Analytica scandal

The U.S. Supreme Court took up an attempt by Meta’s Facebook to dismiss a federal securities fraud lawsuit brought by shareholders, who accuse the social media platform of misleading them about the misuse of its users’ data.

The Supreme Court heard arguments in Facebook’s appeal against a lower court’s decision allowing a 2018 class action led by Amalgamated Bank to move forward. The lawsuit seeks to recover the value investors lost in their Facebook stock. It is one of two securities fraud cases before the court this month; the other involves the chipmaker Nvidia, and rulings in either could make it harder to hold companies accountable for alleged securities fraud.

The key issue is whether Facebook broke the law by failing to disclose a previous data breach in its risk disclosures, which instead portrayed such risks as merely hypothetical.

Facebook argued in its brief to the Supreme Court that reasonable investors understand risk disclosures as forward-looking statements, so the company had no obligation to disclose that a given risk had already materialized.

Justices Elena Kagan and Samuel Alito raised questions during the hearing about whether risk disclosures are always purely forward-looking.

The plaintiffs accused Facebook of violating the Securities Exchange Act by misleading investors about a 2015 data breach involving Cambridge Analytica. The case was initially dismissed, but the U.S. 9th Circuit Court of Appeals reinstated it.

The Cambridge Analytica scandal led to various investigations and legal actions against Facebook. The Supreme Court is expected to reach a decision by June.

Even among the court’s conservative majority, the justices expressed differing views on how investors interpret forward-looking risk disclosures.


Facebook’s stock price fell after 2018 reports that Cambridge Analytica had misused user data in connection with Donald Trump’s 2016 presidential campaign.

Source: www.theguardian.com

Cambridge exhibition showcases AI technology that gives voice to deceased animals

Don’t worry if the preserved bodies, partial skeletons, and taxidermied carcasses that fill the museum seem a little, well, quiet. In the latest coup for artificial intelligence, dead animals will be given a new lease of life, sharing their stories and even their experiences of the afterlife.

More than a dozen exhibits, from American cockroaches and dodo remains to a stuffed red panda and a fin whale skeleton, will be given the gift of conversation on Tuesday for a month-long project at the University of Cambridge Museum of Zoology.

Equipped with personalities and accents, the dead creatures and models converse with visitors by voice or text through their mobile phones, describing their time on Earth and the challenges they faced, in the hope of reversing apathy towards the biodiversity crisis.

“Museums use AI in many ways, but we think this is the first application where we’re speaking from the object’s perspective,” said Jack Ashby, the museum’s assistant director. “Part of the experiment is to see whether giving these animals their own voices makes people think differently about them. Can giving a cockroach a voice change the public’s perception of it?”

A fin whale skeleton hangs from the museum’s roof. Photo: University of Cambridge

The project was devised by Nature Perspectives, a company building AI models to strengthen the connection between people and the natural world. For each exhibit, the AI is given specific details about where the specimen lived, its natural environment, how it arrived in the collection, and everything known about the species it represents.

The exhibits adjust their tone and vocabulary to suit the age of the person they are talking to, and can converse in more than 20 languages, including Spanish and Japanese. The platypus speaks with an Australian accent, the red panda sounds subtly Himalayan, and the mallard has a British accent. Through live conversations with the exhibits, Ashby hopes visitors will learn more than could ever fit on a specimen label.

As part of the project, the conversations visitors have with the exhibits will be analyzed to better understand what information visitors want from specimens. The AI suggests a variety of questions for the fin whale, such as “Tell me about life in the open ocean,” but visitors can ask whatever they like.

“When you talk to these animals, you really get a sense of their personalities. It’s a very strange experience,” Ashby said. “I started out asking questions like ‘Where did you live?’ and ‘How did you die?’, but I ended up asking far more human questions.”

Thanks to AI, the mallard speaks with a British accent. Photo: University of Cambridge

The museum’s dodo, one of the most complete specimens in the world, explained how it fed on fruit, seeds and the occasional small invertebrate in Mauritius, and how its strong, curved beak was perfectly suited to cracking open the tough fruit of the tambalacoque tree.

The AI-enhanced exhibit also shared its views on whether humans should try to revive the species through cloning. “Even with advanced technology, the dodo’s return would require not only our DNA, but also the delicate Mauritian ecosystem that supported our species,” it said. “This is a poignant reminder that the essence of all life goes beyond our genetic code and is intricately woven into our natural habitats.”

A similar level of apparent thoughtfulness came from the fin whale skeleton that hangs from the museum’s roof. Asked about the most famous person it had ever met, it admitted that in its lifetime it never had the chance to meet anyone as “famous” as humans understand the term. “But,” the AI-powered skeleton continued, “I would like to think that anyone who stands below me and feels awe and love for the natural world is important.”

Source: www.theguardian.com