Ofcom Calls on Social Media Platforms to Combat Fraud and Curb Online ‘Pile-Ons’

New guidelines have urged social media platforms to curtail internet “pile-ons” to better safeguard women and girls online.

Ofcom, Britain’s communications regulator, published guidance on Tuesday aimed at tackling misogynistic abuse, coercive control, and the non-consensual sharing of intimate images, with a focus on reducing online harassment of women.

The measures suggest that tech companies limit the number of replies to posts on platforms such as X, a strategy Ofcom believes will reduce incidents in which individual users are inundated with abusive responses.


Additional measures proposed by Ofcom include utilizing databases of images to prevent the non-consensual sharing of intimate photos—often referred to as ‘revenge porn’.

The regulator advocates “hash matching” technology to help platforms remove disputed images. The system converts user-reported images or videos into “hashes” (digital fingerprints) and cross-references them against a database of known illegal content, enabling harmful images to be identified and removed.
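As a rough illustration of the idea (not Ofcom's specification, and with made-up image bytes), hash matching reduces to comparing a fingerprint of each upload against a database of fingerprints of known abusive images. Production systems use perceptual hashes such as PhotoDNA that survive resizing and re-encoding; the sketch below uses SHA-256, which only catches byte-identical copies:

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Fingerprint the raw image bytes. Real systems use perceptual
    hashes that tolerate resizing/re-encoding; SHA-256 only matches
    exact byte-for-byte copies."""
    return hashlib.sha256(data).hexdigest()

# Database of hashes of previously reported images (illustrative values).
blocked_hashes = {
    image_hash(b"reported-image-1"),
    image_hash(b"reported-image-2"),
}

def should_remove(upload: bytes) -> bool:
    """Flag an upload whose fingerprint matches the database."""
    return image_hash(upload) in blocked_hashes
```

The key property is that the platform never needs to store or re-share the images themselves, only their fingerprints.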

These recommendations were put forth under the Online Safety Act (OSA), a significant law designed to shield children and adults from harmful online content.

While the advice is not obligatory, Ofcom is urging social media companies to follow it, announcing plans to release a report in 2027 assessing individual platforms’ responses to the guidelines.

The regulator indicated that the OSA could be reinforced if the recommendations are not acted upon or prove ineffective.

“If their actions fall short, we will consider formally advising the government on necessary enhancements to online safety laws,” Ofcom stated.

Dame Melanie Dawes, Ofcom’s chief executive, said she had encountered “shocking” reports of online abuse directed at women and girls.


Melanie Dawes, Ofcom’s chief executive. Photo: Zuma Press Inc/Alamy

“We are sending a definitive message to tech companies to adhere to practical industry guidance that aims to protect women from the genuine online threats they face today,” Dawes stated. “With ongoing support from our campaigners, advocacy groups, and expert partners, we will hold companies accountable and establish new benchmarks for online safety for women and girls in the UK.”

Ofcom’s other recommendations include prompts asking users to reconsider before posting abusive content, “time-outs” for frequent offenders, and preventing misogynistic users from earning ad revenue on their posts. The guidance also recommends letting users swiftly block or mute several accounts at once.

These recommendations conclude a process that started in February, when Ofcom conducted a consultation that included suggestions for hash matching. However, more than a dozen guidelines, like establishing “rate limits” on posts, are brand new.

Internet Matters, a nonprofit organization dedicated to children’s online safety, argued that the government should make the guidance mandatory, cautioning that many tech companies might otherwise ignore it. Ofcom is considering whether to make its hash matching recommendation compulsory.

Rachel Huggins, co-chief executive of Internet Matters, remarked: “We know many companies will disregard this guidance simply because it is not legally binding, leading to continued unacceptable levels of online harm faced by women and girls today.”

Source: www.theguardian.com

Crypto Mogul Do Kwon Admits Guilt in Fraud Linked to $400 Billion Market Collapse

Do Kwon, the South Korean entrepreneur behind two cryptocurrencies whose collapse caused an estimated $400 billion loss in 2022 and significant market turbulence, pleaded guilty to two fraud charges, including wire fraud, in a US court on Tuesday.

Kwon, 33, co-founded Terraform Labs in Singapore and created the TerraUSD and Luna currencies. He appeared at a federal court hearing in New York, having initially pleaded not guilty in January to nine charges, including securities fraud, wire fraud, commodities fraud, and conspiracy to commit money laundering.

Kwon was accused of deceiving investors in 2021 about TerraUSD, a stablecoin intended to maintain a value of one US dollar, and pleaded guilty to two counts under a plea agreement with Manhattan prosecutors.

He could face a maximum of 25 years in prison when Judge Paul Engelmayer sentences him on December 11. However, prosecutor Kimberly Ravener said prosecutors have agreed to seek a term of no more than 12 years if Kwon accepts responsibility for his actions. He has been in custody since his extradition from Montenegro late last year.

Kwon is among several cryptocurrency executives to face federal charges after the 2022 downturn in digital token prices led to the collapse of numerous businesses. Sam Bankman-Fried, the founder of FTX, one of the largest crypto exchanges, was sentenced to 25 years in prison in 2024.


Prosecutors allege that when TerraUSD dipped below $1 in May 2021, Kwon misled investors by claiming that the “Terra Protocol,” a computer algorithm, had restored the coin’s value. In fact, he had allegedly arranged for high-frequency trading firms to covertly purchase millions of dollars of tokens to artificially prop up the price.

These false representations reportedly misled retail and institutional investors, enticing them to invest in Terraform products and escalate the value of Luna.

During the court proceedings, Kwon expressed remorse for his actions.

“I made misleading statements about why it regained its value, without disclosing the trading company’s involvement in restoring the peg,” Kwon stated. “What I did was wrong.”


Kwon also agreed in 2024 to pay $80 million in civil penalties and to a ban on crypto trading as part of a $4.55 billion settlement with the U.S. Securities and Exchange Commission.

He also faces charges in South Korea. As part of the plea agreement, prosecutors said they would not oppose his potential transfer to serve part of his sentence overseas after completing a portion of his time in the US, Ravener stated.

Source: www.theguardian.com

Thousands of UK University Students Caught Cheating Using AI

In recent years, thousands of university students in the UK have been caught misusing ChatGPT and similar AI tools, while traditional forms of plagiarism appear to be in significant decline, a Guardian investigation reveals.

The investigation found that confirmed academic integrity violations involving AI tools rose to 5.1 cases per 1,000 students, with nearly 7,000 verified instances in the 2023-24 academic year, up from 1.6 cases per 1,000 students in 2022-23.

Experts anticipate these figures will increase further this year, estimating potential cases could reach around 7.5 per 1,000 students, although reported cases likely reflect only a fraction of the actual instances.
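The per-1,000 figures above are simple rates. As a minimal illustration of the arithmetic (using round numbers, not the Guardian's underlying data):

```python
def rate_per_1000(confirmed_cases: int, student_population: int) -> float:
    """Express confirmed misconduct cases as a rate per 1,000 students."""
    return round(confirmed_cases / student_population * 1000, 1)

# Illustrative: 51 confirmed cases among 10,000 students is 5.1 per 1,000,
# the same shape as the figures reported in the investigation.
print(rate_per_1000(51, 10_000))
```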

This data underscores the rapidly changing landscape for universities as they strive to update evaluation methods in response to emerging technologies like ChatGPT and other AI-driven writing tools.

Before generative AI became widely available, in the 2019-20 academic year, plagiarism accounted for nearly two-thirds of all academic misconduct. Plagiarism rates surged during the pandemic as many assessments moved online. But as AI tools have advanced, the character of academic misconduct has changed.

Projections suggest that confirmed instances of traditional plagiarism, which stood at 15.2 per 1,000 students in 2023-24, could fall to approximately 8.5 per 1,000 students this academic year.

A set of charts showing verified misconduct cases per 1,000 students: plagiarism rises from 2019-20 to 2022-23 and then falls back, while AI-related misconduct rises from 2022-23 to a level comparable to plagiarism. “Other misconduct” remains stable.

The Guardian sent Freedom of Information requests to 155 universities asking for confirmed cases of academic misconduct, including plagiarism and AI-related cheating, over the past five years. Of these, 131 responded, though not all held records broken down by year or by category of misconduct.

More than 27% of responding institutions did not record AI misuse as a distinct category of misconduct in 2023-24, suggesting the sector has yet to fully acknowledge the issue.

Numerous instances of AI-related cheating may go undetected. A survey by the Higher Education Policy Institute found that 88% of students admitted to using AI for assessments. And last year, researchers at the University of Reading tested their own assessment system and found that AI-generated submissions went undetected 94% of the time.

Dr. Peter Scarfe, an associate professor of psychology at the University of Reading and co-author of the research, noted that while cheating has existed for a long time, AI poses a fundamentally different challenge that the education sector must adapt to.

He remarked, “I believe the reality we see reflects merely the tip of the iceberg. AI detection operates differently from traditional plagiarism checks, making it almost impossible to prove misuse. If an AI detector indicates AI usage, it’s challenging to counter that claim.”

“We cannot merely transition all student assessments to in-person formats. Simultaneously, the sector must recognize that students are employing AI even if it goes unreported or unnoticed.”

Students keen to avoid AI detection have numerous online resources at their disposal. The Guardian found various TikTok videos that promote AI paraphrasing and essay writing tools tailored for students, which can circumvent typical university AI detection systems by effectively “humanizing” text produced by ChatGPT.

Dr. Thomas Lancaster, a researcher of academic integrity at Imperial College London, stated, “It’s exceedingly challenging to substantiate claims of AI misuse among students who are adept at manipulating the generated content.”

Harvey*, who has just completed a business management degree at a university in the north of England, told the Guardian that he used AI to brainstorm ideas, structure assignments, and suggest references, and that many of his peers did the same.

“When I started university, ChatGPT was already available, making its presence constant in my experience,” he explained. “I don’t believe many students use AI simply to replicate text. Most see it as a tool for generating ideas and inspiration. Any content I derive from it, I thoroughly rework in my style.”

“I know people who, after using AI, enhance and adapt the output through various methods to make it sound human-authored.”

Amelia*, who has just completed her first year in a music business program at a university in the southwest, also acknowledged using AI for summarization and brainstorming, highlighting the tool’s significant benefits for students with learning difficulties. “A friend of mine uses AI for structuring essays rather than relying solely on it to write or study, integrating her own viewpoints and conducting some research. She has dyslexia.”

Science and Technology Secretary Peter Kyle recently emphasized to the Guardian the importance of leveraging AI to “level the playing field” for children with dyslexia.

Technology companies evidently see students as a key market for their AI products. Google is currently offering university students in the US and Canada a free 15-month upgrade to its Gemini tools.

Lancaster stated, “Assessment methods at the university level may feel meaningless to students, even if educators have valid reasons for their structure. Understanding the reasons behind specific tasks and engaging students in the assessment design process is crucial.”

“There are frequent discussions about the merits of increasing the number of examinations instead of written assessments, yet the value of retaining knowledge through memorization diminishes yearly. Emphasis should be on fostering communication skills and interpersonal abilities—elements that are not easily replicable by AI and crucial for success in the workplace.”

A government spokesperson stated that over £187 million has been invested in the national skills program, with guidelines issued on AI utilization within schools.

They affirmed: “Generative AI has immense potential to revolutionize education, presenting exciting prospects for growth during transitional periods. However, integrating AI into education, learning, and assessment necessitates careful consideration, and universities must determine how to harness its advantages while mitigating risks to prepare for future employment.”

*Name has been changed.

Source: www.theguardian.com

Musk-Linked Group Offered $5 Million for Evidence of Voter Fraud, Found Nothing | US Election Integrity

In May 2024, an ad went viral on social media: “There are real cases of fraud and abuse across the country [in the election] system that erode our trust.” The ad vowed that “whistleblowers” who shared evidence of election fraud would be rewarded with payments from a $5 million fund.

The reward was courtesy of a newly announced group, the Fair Election Fund, which documents show has deep connections to Elon Musk’s political network.

The Fair Election Fund pledged to devote a large portion of its budget to whistleblower payments and promised to highlight these cases, sharing whistleblowers’ stories through a “paid and earned” media campaign.

Another ad followed, running in swing states during the Olympics, telling viewers who shared evidence of election fraud that “you might qualify for compensation.”

Despite the group’s high-profile, deep-pocketed backers and generous bounty offers, it revealed no evidence of voter or election fraud. Instead, the group made a series of detours into tangential areas such as third-party ballot access, and its efforts to expose fraud only reaffirmed what numerous studies, court decisions, and bipartisan investigations have concluded: voter fraud is extremely rare.

The lack of evidence has not stopped Republicans in Congress and state legislatures from continuing to push restrictive voting measures intended to address this phantom threat. Meanwhile, Musk argues that “fraud” justifies his efforts to cut government operations; he, too, has revealed little evidence.

The Fair Election Fund has since gone radio silent. Sitemap data shows its website has not been updated since October, and the group’s X/Twitter account has not posted since November. The group’s spokesman, former congressman Doug Collins, has since become Trump’s veterans affairs secretary and also leads the government’s ethics bureau.

Close relationship with the world’s wealthiest man

The Fair Election Fund is a fictitious name for another 501(c)(4) nonprofit, documents reveal, and it operated within a network run by Musk’s top political advisers. The group received funds from the same dark money vehicle Musk used to direct his political spending, and it routed funds to another Musk-backed nonprofit.

The group is housed in a nonprofit organization now called Interstate Priorities. The nonprofit was formed on January 3, 2023, and raised $8,226,000 from a single donation in 2023.

The group is led by Victoria “Tori” Sachs, who also leads And To The Republic, a group formed in January 2023 to support Ron DeSantis’s presidential bid, including by funding private jet travel and hosting quasi-campaign events.

The parallel naming of the two Sachs-led groups and the timing of their creation, both in January 2023, suggest that the nonprofit now housing the Fair Election Fund was originally intended to support the DeSantis operation, which Musk initially backed.

Sachs’ involvement continued into 2024; her name appears on records accompanying purchases of the Fair Election Fund’s broadcast advertising.

Since 2022, Musk has secretly channeled his political spending through a dark money nonprofit called Building America’s Future. The group is run by Generra Peck and Phil Cox, two Republican operatives from DeSantis’s failed presidential bid who now advise Musk. Building America’s Future reportedly supported the Fair Election Fund in 2024, and it also provided half of And To The Republic’s overall funding in 2023.

Ron DeSantis, the Florida governor, whose presidential bid was supported by the And To The Republic group. Photo: Cristóbal Herrera/EPA

The Fair Election Fund has other connections to the Musk advisers who lead Building America’s Future. Cox’s digital marketing company, IMGE LLC, which serves several groups in the Musk-backed Building America’s Future universe, manages the Fair Election Fund’s Facebook page, and IMGE employees appear to be responsible for articles on the Fair Election Fund’s website.

The Fair Election Fund/Interstate Priorities has also served as a conduit for supporting other Musk-backed groups. The group’s 2023 tax return shows a $1,550,000 grant to Citizens for Sanity, a group that was funded in 2022 by Building America’s Future and that aired racist and transphobic ads that election cycle. The grant made up nearly all of Citizens for Sanity’s 2023 funding.

During the 2024 election cycle, Musk disclosed at least $277 million in political contributions to super PACs working to elect President Trump and other Republicans. It is not known how much he gave to politically active groups that disguise their donors.

Detours into third-party ballot access

The Fair Election Fund’s stated goal of exposing election fraud quickly seemed to fall by the wayside.

Of the $5 million fund, the group announced only $75,000 in “bounty” payments: $50,000 in July 2024 and $25,000 in September 2024. While the Fair Election Fund promised to highlight the election fraud stories collected through these payments via “paid and earned media campaigns,” there is no indication that any of the evidence generated was consequential or credible.

Instead, the group detoured. In July 2024 it launched a $175,000 ad “blitz” targeting members of the North Carolina State Board of Elections (NCSBE) for delaying a decision on whether third-party presidential candidates Cornel West and Robert F. Kennedy Jr would appear on the ballot. At the time, Republicans and their allies believed West and Kennedy would act as spoilers to help Trump by pulling left-leaning votes away from the Democratic presidential candidate.

Ironically, the NCSBE had delayed its decisions on West’s and Kennedy’s eligibility based on evidence that their petitions had been gathered through fraudulent means, concerns that would seem to align with the Fair Election Fund’s mission of exposing election fraud.

Fair Election Fund ads declared that NCSBE Democrats were “threatening your right to vote” and offered compensation for evidence of members’ “shady backroom deals.” The group also projected images onto the NCSBE building and drove mobile signs around the agency’s headquarters.

The Fair Election Fund also ran digital ads in North Carolina featuring Black voters, some declaring that “no African American voices are heard,” others urging viewers to “support equality, support inclusion, support [Cornel West’s] justice for all parties.” The group promoted similar efforts in states such as Michigan.

Marc Elias, a Democratic lawyer who worked to keep West off the ballot in North Carolina and elsewhere, was a frequent target of the group. In October 2024, the group announced a six-figure ad buy to “troll” Elias. The ads included mobile billboards around the Elias Law Group office and a full-page ad in the Washington Post attacking Elias and his “racist voter suppression lawsuit” against Cornel West, boasting that the Fair Election Fund had stopped him.

The Fair Election Fund then turned to a series of efforts chasing other trending right-wing conspiracy theories.

For example, over the summer the Fair Election Fund targeted the online fundraising platform ActBlue, claiming it had found “60,000 potential discrepancies” in ActBlue-facilitated contributions to the Biden-Harris campaign, based on a survey conducted “from late July to early August.” The group said it “spent $250,000” on these initial findings.

Source: www.theguardian.com

AI system used to detect UK benefits fraud shows bias, review finds | Universal Credit

The Guardian has uncovered that artificial intelligence systems utilized by the UK government to identify welfare fraud exhibit bias based on individuals’ age, disability, marital status, and nationality.

A review of a machine learning program used to assess thousands of universal credit advance payment claims across the UK revealed that certain groups were mistakenly flagged more frequently than others.

This revelation came from documents published under the Freedom of Information Act by the Department for Work and Pensions (DWP). A “fairness analysis” conducted in February of this year uncovered a significant discrepancy in outcomes within the Universal Credit Advance automated system.
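The published documents do not spell out the DWP's methodology, but a "fairness analysis" of this kind typically compares how often claims from different demographic groups are flagged for investigation. A minimal sketch, with hypothetical group labels and counts (not DWP data):

```python
from collections import defaultdict

def flag_rates(cases: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of claims flagged for fraud investigation, per group.
    Large gaps between groups indicate the kind of disparate
    outcomes a fairness analysis is designed to surface."""
    totals: dict[str, int] = defaultdict(int)
    flagged: dict[str, int] = defaultdict(int)
    for group, was_flagged in cases:
        totals[group] += 1
        flagged[group] += was_flagged
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_ratio(rates: dict[str, float]) -> float:
    """Ratio of the highest to the lowest group flag rate; 1.0 means parity."""
    return max(rates.values()) / min(rates.values())

# Hypothetical claims: (group label, whether the system flagged the claim).
cases = [
    ("under_35", True), ("under_35", True), ("under_35", False), ("under_35", False),
    ("over_35", True), ("over_35", False), ("over_35", False), ("over_35", False),
]
print(flag_rates(cases), disparity_ratio(flag_rates(cases)))
```

A full audit would also compare error rates (how often each group's flags turn out to be mistaken), which is closer to the "mistakenly targeted" finding described above.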

Despite the DWP’s previous claims that the AI system raised no discrimination concerns, the emergence of this bias raises important questions about its impact on claimants.

Concerns have been raised by activists regarding the potential harm caused by the government’s policies and the need for transparency in the use of AI systems.

The DWP has been urged to adopt a more cautious approach and cease the deployment of tools that pose a risk of harm to marginalized groups.

The discovery of disparities in fraud risk assessment by automated systems may lead to increased scrutiny of the government’s use of AI, emphasizing the need for greater transparency.

The UK public sector employs a significant number of automated tools, with only a fraction being officially registered.

The lack of transparency in the use of AI systems by government departments has raised concerns about potential misuse and manipulation by malicious actors.

The DWP has stated that their AI tools do not replace human judgment and that caseworkers evaluate all available information when making decisions related to benefits fraud.

Source: www.theguardian.com

UK police boss warns that AI is on the rise in sextortion, fraud, and child abuse cases

A senior police official has issued a warning that pedophiles, fraudsters, hackers, and criminals are now utilizing artificial intelligence (AI) to target victims in increasingly harmful ways.

According to Alex Murray, the national policing lead for AI, criminals are taking advantage of increasingly accessible AI technology, necessitating swift action by law enforcement to combat these new threats.

Murray stated, “Throughout the history of policing, criminals have shown ingenuity and will leverage any available resource to commit crimes. They are now using AI to facilitate criminal activities.”

He further emphasized that AI is being used for criminal activities on both a global organized crime level and on an individual level, demonstrating the versatility of this technology in facilitating crime.

During the recent National Police Chiefs’ Council meeting in London, Mr. Murray highlighted a new AI-driven fraud scheme where deepfake technology was utilized to impersonate company executives and deceive colleagues into transferring significant sums of money.

Instances of similar fraudulent activities have been reported globally, with concern growing over the increasing sophistication of AI-enabled crimes.

The use of AI by criminals extends beyond fraud, with pedophiles using generative AI to produce illicit images and videos depicting child sexual abuse, a distressing trend that law enforcement agencies are working diligently to combat.

Additionally, hackers are employing AI to identify vulnerabilities in digital systems, providing insights for cyberattacks, highlighting the wide range of potential threats posed by the criminal use of AI technology.

Furthermore, concerns have been raised regarding the radicalization potential of AI-powered chatbots, with evidence suggesting that these bots could be used to encourage individuals to engage in criminal activities including terrorism.

As AI technologies continue to advance and become more accessible, law enforcement agencies must adapt rapidly to confront the evolving landscape of AI-enabled crime and head off the surge in AI-driven offending that police predict by 2029.

Source: www.theguardian.com

Facebook requests U.S. Supreme Court to drop fraud lawsuit regarding Cambridge Analytica scandal

The U.S. Supreme Court heard Meta-owned Facebook’s attempt to dismiss a federal securities fraud lawsuit brought by shareholders, who accuse the social media platform of misleading investors about the misuse of user data.

The justices heard arguments in Facebook’s appeal of a lower court decision that allowed a 2018 class action led by Amalgamated Bank to move forward. The lawsuit seeks to recover value investors lost in Facebook stock. The court is also hearing a case this month in which litigants accuse Nvidia of securities fraud; rulings in the companies’ favor could make it harder to hold corporations accountable.

The key issue is whether Facebook broke the law by not disclosing previous data breaches in its risk disclosures, portraying the risks as hypothetical.

Facebook argued in its brief to the Supreme Court that reasonable investors would see risk disclosures as forward-looking statements, eliminating the need to disclose previous risks that materialized.

Justice Elena Kagan and Justice Samuel Alito raised questions during the hearing about whether risk disclosures are necessarily understood as purely forward-looking.

The plaintiffs accused Facebook of violating the Securities Exchange Act by misleading investors about a 2015 data breach involving Cambridge Analytica. The case was initially dismissed, but the U.S. 9th Circuit Court of Appeals reinstated it.

The Cambridge Analytica scandal led to various investigations and legal actions against Facebook. The Supreme Court is expected to reach a decision by June.

Despite the conservative majority on the Supreme Court, there are differing views on how investors interpret forward-looking risk disclosures.


Facebook’s stock price dropped after reports in 2018 regarding the misuse of user data by Cambridge Analytica in connection with President Donald Trump’s 2016 campaign.

Source: www.theguardian.com

AI Cheating is a Growing Issue in Education, But Teachers Shouldn’t Lose Hope | Opinion Piece by John Naughton

The start of term is fast approaching. Parents are starting to worry about packed lunches, uniforms, and textbooks. School leavers heading to university are wondering what welcome week will be like for new students. And some professors, especially in the humanities, are anxiously wondering how to handle students who are already more adept with large language models (LLMs) than they are.

They have good reason to be worried. As Ian Bogost, a professor of film and media studies and of computer science at Washington University in St. Louis, put it: “If the first year of AI college ended with a sense of disappointment, the situation has now descended into absurdity. Teachers struggle to continue teaching while wondering whether they are grading students or computers. Meanwhile, the arms race in AI cheating and detection continues unabated.”

As expected, the arms race is already intensifying. The Wall Street Journal recently reported that “OpenAI has a way to reliably detect if someone is using ChatGPT to write an essay or research paper, but the company has not disclosed it, despite widespread concerns that students are using artificial intelligence to cheat.” This refusal has infuriated a sector of academia that imagines, admirably, that there must be a technological solution to this “cheating” problem. Apparently they have not read the Association for Computing Machinery’s statement of principles for developing generative AI content detection systems, which states that “reliably detecting the output of a generative AI system without an embedded watermark is beyond the current state of the art and is unlikely to change within any foreseeable timeframe.” Digital watermarks are useful, but they come with problems of their own.
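For context on why watermarks are singled out: published schemes bias a model toward a pseudo-random "green" subset of the vocabulary at each step, so a detector that knows the seeding scheme can test whether a text contains improbably many green tokens. The toy sketch below shows the detection side; an unkeyed hash stands in for the secret key a real scheme would use, so this is illustrative, not any vendor's method:

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary,
    seeded by the previous token (a toy stand-in for a keyed hash)."""
    ranked = sorted(
        vocab,
        key=lambda tok: hashlib.sha256((prev_token + tok).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """z-score for how often each token falls in its predecessor's green
    list. Unwatermarked text should hover near 0; text generated by a
    model that favors green tokens scores high."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    return (hits - expected) / math.sqrt(n * fraction * (1 - fraction))
```

The ACM's point follows directly: without an embedded signal like this, a detector has no statistic to test, which is why unwatermarked AI text resists reliable detection.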

The LLM is a particularly pressing problem for the humanities because the essay is a critical pedagogical tool in teaching students how to research, think, and write. Perhaps more importantly, the essay also plays a central role in grading. Unfortunately, the LLM threatens to make this venerable pedagogy unviable. And there is no technological solution in sight.

The good news is that the problem is not insurmountable if educators in these fields are willing to rethink and adapt their teaching methods to fit new realities. Alternative pedagogies are available. But it will require two changes of thinking, if not a change of heart.

First, LLMs are, as the eminent Berkeley psychologist Alison Gopnik says, “cultural technologies,” just like writing, printing, libraries, and internet search. In other words, they are tools that humans use to augment themselves, not replacements for human thought.

Second, and perhaps more importantly, writing needs to be reinstated in students’ minds as a process. E.M. Forster is said to have observed that there are two kinds of writers: those who know what they think and write it down, and those who find out what they think by trying to write it. The majority of humanity belongs to the latter camp. That’s why the process of writing is so good for the intellect. Writing teaches you to construct a coherent line of argument, select relevant evidence, find useful sources and inspiration, and, most importantly, express yourself in clear, readable prose. For many, that’s not easy or natural. That’s why students turn to ChatGPT even when they’re asked to write 500 words introducing themselves to their classmates.

Josh Brake, an American scholar who writes intelligently about our relationship with AI, believes that rather than trying to “integrate” AI into the classroom, it is worth making the value of writing as an intellectual activity fully clear to students. If you think of writing as nothing but a deliverable, naturally you would be interested in outsourcing the labor to an LLM. And if writing (or any other job) really is just about the deliverable, why not? If the means to an end aren’t important, why not outsource it?

Ultimately, the problems that LLMs pose for academia can be solved, but this will require new thinking and different approaches to teaching and learning in some areas. The bigger problem is the slow pace at which universities move. I know this from experience. In October 1995, the American scholar Eli Noam published a very insightful article in Science on the dim future of the university in an electronic age. Between 1998 and 2001, I asked every vice-chancellor and senior university leader I met in the UK what they thought of it. None of them had read it.

Still, things have improved since then: at least now everyone knows about ChatGPT.

What I'm Reading

Online Crime
Ed West has an interesting blog post about a man found guilty over online posts made during the unrest that followed the Southport stabbings. It highlights the contradictions in the British judicial system.

Bannon’s ‘dharma’
Here is an interesting Boston Review interview in which the documentary filmmaker Errol Morris discusses Steve Bannon’s dangerous “dharma”: his consciousness of being part of the inevitable unfolding of history.

Online forgetting
A sobering article by Neil Firth in MIT Technology Review on efforts to preserve digital history for future generations amid an ever-growing universe of data.

Source: www.theguardian.com

Researcher working on promising Alzheimer’s drug facing charges of research fraud

Summary

  • A neuroscientist who helped develop a potential treatment for Alzheimer’s disease has been indicted on fraud charges.
  • The charges relate to allegations that the scientist fabricated research images and data used to win grant funding.
  • Manipulation of research images is a growing concern in the scientific community.

A neuroscientist who contributed to the development of a potential Alzheimer’s disease treatment is facing fraud charges after a federal grand jury indictment on Thursday.

The indictment alleges that Hoau-Yan Wang, a professor of medicine at the City University of New York, engaged in fraudulent activities, including falsifying research images and data to secure grant funding from the National Institutes of Health.

Wang worked with Cassava Sciences, a pharmaceutical company based in Austin, Texas, on the development of simufilam, a drug candidate for Alzheimer’s disease. The indictment states that around $16 million in grant funding was tied to Wang’s early-stage drug development work with Cassava.

The indictment accuses Wang of fraud against the United States, wire fraud, and making false statements. It claims that Wang manipulated images of Western blots, a laboratory technique used to detect proteins, to support his research and grant applications.

The indictment also alleges that Wang provided false information to scientific journals to support his research on simufilam, a drug currently in late-stage clinical trials.

Wang did not respond to requests for comment. His research has faced scrutiny in the past, leading to retractions of multiple studies and an investigation by CUNY.

Cassava Sciences confirmed that Wang was not involved in their latest clinical trials and emphasized that his research focused on early-stage drug development.

The scientific community has expressed growing concerns about research misconduct and the manipulation of data and images. Instances of research misconduct, such as the allegations against Wang, have led to retractions of studies and raised questions about the integrity of scientific research.

CUNY has stated that it will cooperate fully with the federal investigation into Wang’s alleged misconduct. The university acknowledges the seriousness of the charges and the impact they may have on the scientific community.

The case highlights the importance of maintaining integrity and transparency in scientific research to ensure the credibility and validity of scientific discoveries.

Retraction Watch has reported on the retraction of several academic papers authored by Wang, further underscoring the need for accountability and ethical practices in scientific research.

Source: www.nbcnews.com

Students Implicated in Cyber Fraud After Police Discover Involvement in Massive Phishing Site

Police have uncovered a disturbing trend among university students, who are resorting to cyber fraud to boost their income. They have managed to infiltrate a large phishing site on the dark web that has defrauded tens of thousands of individuals.

The site, known as LabHost, had been operational since 2021 and served as a hub for cyber fraud, enabling users to create realistic-looking websites mimicking reputable companies such as major banks. It ensnared tens of thousands of victims globally, including around 70,000 individuals in the UK.

Victims unknowingly provided sensitive information, which was then used to siphon money from their accounts. The perpetrators behind the site profited by selling this stolen data on the dark web to other fraudsters.

According to the Metropolitan Police, the primary victims fall within the 25-44 age bracket, with a significant portion of their activities carried out online.

Law enforcement authorities have apprehended one of the alleged masterminds behind the site, along with 36 other suspects detained in the UK and abroad. Arrests were made at locations including Manchester and Luton airports, and in Essex and London.

British police are facing mounting pressure to demonstrate their effectiveness in combating the rising tide of cyber fraud.

Despite the relatively small impact of dismantling this particular site, the police intend to dismantle additional cyber fraud operations to undermine the confidence of criminals who believe they can act with impunity.

While fraud and cybercrime present considerable challenges for law enforcement agencies, they often compete for resources with other policing priorities, such as safeguarding children and enhancing women’s safety.

LabHost amassed significant amounts of sensitive data, including 480,000 debit or credit card numbers and 64,000 PINs, and generated over £1 million in membership fees from the 2,000 individuals who paid in cryptocurrency.

The site lured users with tutorial videos on how to commit crimes using its tools, marketing itself much like a legitimate consumer product. It promised software set-up within five minutes and offered “customer service” in case of any issues.

DI Oliver Richter noted that cyber fraud has shifted from requiring technical skills such as coding to being accessible to individuals ranging from their late teens to their late 20s, many of them university students.

He expressed concern that these users may not fully grasp the risks and consequences of their actions, assuming anonymity and ease of operation.

Following the dismantling of the site, 800 users received warnings that the police were aware of their activities.

Detective Inspector Helen Rance, head of the Metropolitan Police’s cybercrime unit, described the LabHost bust as a sophisticated operation targeting those who have commercialized fraud. She highlighted collaboration with 17 partner organizations globally, across both the public and private sectors.

She emphasized the success of penetrating the service, identifying the perpetrators, and understanding the scale of their illicit operations.

Source: www.theguardian.com

Key takeaways from the initial week of Mike Lynch’s fraud trial in the US | Autonomy

Mike Lynch, once known as ‘Britain’s Bill Gates’ and the top technology entrepreneur in Britain, reached the pinnacle of his career when his software company was acquired by a Silicon Valley giant for $11bn (£8.6bn). More than a dozen years later, that acquisition is the focus of a trial in San Francisco that began last Monday.

U.S. authorities have charged Lynch with 16 counts of wire fraud, securities fraud, and conspiracy, alleging that Hewlett-Packard’s purchase of Autonomy was based on deceitful information. If found guilty, he could be sentenced to up to 25 years in prison. Lynch has pleaded not guilty.


The trial will center on the events of 2011 when HP acquired Autonomy. In the coming weeks, jurors will hear from numerous witnesses in a courtroom directly above the former Autonomy skyscraper site in San Francisco.

Once hailed as “Britain’s Bill Gates,” Lynch spent the first week of his trial quietly listening as federal prosecutors targeted his former empire. He occasionally interacted with his lawyer or worked on his laptop, at times wearing a smile.

1. 2011 Revisited

In 2011, David Cameron was still in office, Barack Obama was president, and movie buffs were enthralled by the final Harry Potter film.

Lynch has consistently claimed that HP mishandled the Autonomy acquisition, leading to its downfall. However, Judge Charles Breyer ruled that the trial’s focus should not include the aftermath of the deal.

Explaining financial transactions and complex arguments from over a decade ago to a new jury presents a significant challenge.

The trial started with the prosecution highlighting a crucial meeting in early 2011 where Lynch allegedly misled HP executives about Autonomy’s success, leading to the $11 billion fraud accusation.

The defense painted Lynch as a tough but brilliant inventor who delegated tasks to talented managers, minimizing his involvement in daily operations.

2. Simplifying the Complex

Government prosecutors accused Lynch of repeatedly lying to investors and auditors, orchestrating a multi-year fraud through deceptive accounting practices.

As the trial progresses, Lynch’s team plans to portray him as a hands-off leader who was unfairly blamed for HP’s struggles and the Autonomy deal.

Source: www.theguardian.com

“British tech company accused of being ‘controlling’ as Mike Lynch fraud trial continues into second day” | Autonomy

On the first day of his criminal trial, prosecutors portrayed British entrepreneur Mike Lynch as a controlling boss who orchestrated a massive fraud. Lynch is set to return to court in San Francisco on Tuesday.

Lynch, the co-founder of Autonomy, is accused of inflating the software company’s sales, misleading auditors, analysts, and regulators, and threatening those who raised concerns before its acquisition by Hewlett-Packard (HP) in 2011.

Lynch’s lawyers plan to have him testify once prosecutors complete their case against him. He has denied all allegations of wrongdoing and faces up to 25 years in prison if convicted.

HP’s $11.1 billion deal to acquire Autonomy soured when HP wrote down the company’s value by $8.8 billion, citing alleged accounting irregularities, omissions, and misstatements in the business.

As the trial commenced, prosecutors called on Ganesh Vaidyanathan, Autonomy’s former head of accounting, as the first witness to testify about accounting issues raised in 2010.

Assistant U.S. Attorney Adam Reeves argued that Lynch presented Autonomy as a successful company to HP but that its financial statements were false and misleading due to accounting tricks and concealing hardware sales.

Stephen Chamberlain, Autonomy’s former finance director, also pleaded not guilty to charges of falsifying documents and misleading auditors; his attorney suggested he was a pawn caught in a battle between giants.

Lynch alleges that Autonomy’s poor performance after the acquisition was due to mismanagement by HP, not to wrongdoing before the deal. He has spent his time under house arrest preparing for trial.

Extradited from Britain to the U.S. last year, Lynch posted bail and wears a GPS tag on his ankle under 24-hour guard surveillance.

Source: www.theguardian.com

Today marks the start of the criminal fraud trial of British technology mogul Mike Lynch | Autonomy

The criminal fraud trial of the British technology mogul once referred to as “Britain’s Bill Gates” is set to commence today in San Francisco.

Mike Lynch, the co-founder of British software company Autonomy, stands accused of artificially boosting the company’s sales, of deceiving auditors, analysts, and regulators, and of threatening those who raised concerns in the run-up to Hewlett-Packard’s takeover of the company in 2011.


He has consistently denied any wrongdoing and maintains his innocence. If found guilty, he could face up to 25 years in prison.

HP purchased Autonomy in an $11.1bn (£8.72bn) deal to enhance its software business. However, just a year later, it wrote down the company’s value by $8.8 billion, citing accounting irregularities and misstatements in the business.

In 2019, Lynch was indicted by a federal grand jury on 17 charges, including wire fraud, securities fraud, and conspiracy.

Despite past accolades, including an OBE in 2006 for his contributions to enterprise and an appointment to Prime Minister David Cameron’s Science and Technology Council in 2011, Lynch’s current situation is dire. He has spent the past year under house arrest preparing for trial.

Lynch was extradited from Britain to the US last May. After posting $100 million bail, he was required to wear a GPS ankle tag and be under constant surveillance by armed guards.


In a first relaxation of his bail conditions back in November, he was permitted to leave the luxurious San Francisco compound where he is confined, daily between 9 am and 9 pm, albeit under strict conditions.

Source: www.theguardian.com

Concerned about AI voice fraud? Don’t worry, I have a guaranteed solution | Zoe Williams

A friend of mine was recently fooled by a fraudulent email purporting to be from her teenage daughter, and transferred £100 into a stranger’s account to cover a mysterious situation that was described as very time-sensitive and inconvenient. That was it.

You can imagine how the scammers pulled it off. Think of the everyday low-level anxiety of parents braced for bad news whenever their children are further away than the kitchen table. What’s more, a bad-news story that begins with a 19-year-old’s email saying, “I broke my phone,” is completely believable. All the scammer has to do is lean into it.

Still, the scam only works if the victim neglects to ask basic questions such as: “But if your phone is broken, why am I transferring money to someone else’s bank account?” — and we will be calling my friend a fool for years to come. She didn’t even call her daughter’s number to see if she could reach her. Ending up £100 lighter was probably the best possible place to land: if someone tries to prise away your life savings, you concentrate.

But what happens when you hear what sounds exactly like your own child begging for money? Who has defences strong enough to withstand voice cloning? Members of Stop Scams UK explained this to me last year: scammers can extract a child’s voice from their TikTok account, and then all they have to do is find the parent’s phone number. At first I got the wrong end of the stick, imagining they had to piece a message together from words already recorded on social media. Good luck wringing believable havoc out of football tips and K-pop, I thought. I hadn’t considered for even 10 seconds whether AI could infer speech patterns from samples. In fact, it can.

I think it’s still pretty easy to get around. The machine-kid asks for urgent assistance. You say: “Precious and perfect being, I love you with all my heart.” The machine-kid will surely reply, “I love you too.” Why wouldn’t it? A real child would claim the endearment had made them sick in the mouth. You can’t build an algorithm for that.

Zoe Williams is a columnist for the Guardian




Source: www.theguardian.com

Nikola’s Founder Trevor Milton Faces Four-Year Prison Sentence for Securities Fraud

Trevor Milton, the disgraced founder and former CEO of electric truck startup Nikola, has been sentenced to four years in prison for securities fraud. The ruling by Judge Edgardo Ramos of U.S. District Court in Manhattan brings an end to a years-long saga in which Nikola’s stock soared 83% at one point, only to plummet months later amid fraud charges and contract cancellations.

The sentencing hearing was postponed four times, during which time Milton remained free on $100 million bail.

In handing down the sentence, Ramos imposed 48 months in prison on each count, to run concurrently, plus a $1 million fine. Milton is expected to appeal the ruling, which Ramos acknowledged.

Milton sobbed before sentencing, pleading with Judge Ramos for leniency in a lengthy and often confusing statement. At one point, Milton said he had stepped down as Nikola’s CEO not because of the fraud allegations, but to support his wife.

“I resigned because my wife was suffering from a life-threatening illness,” he said in a statement shared by Inner City Press reporter Matthew Russell Lee in a post on X. “She suffered from medical malpractice; she was given someone else’s plasma. So I resigned because of that — not because I was a fraud. Truth matters. I chose my wife over money and power.”

Milton, 41, was found guilty by a jury in October 2022 of one count of securities fraud and two counts of wire fraud for lying to investors about the development of Nikola’s electric trucks in order to inflate the company’s stock price.

At the sentencing hearing, defense attorneys said Milton never intended to defraud investors or harm anyone. Rather, he claimed he just wanted to be loved and admired like Elon Musk. Prosecutors pushed back, arguing that he repeatedly lied and targeted individual investors.

Milton faced up to 60 years in prison, although federal prosecutors recommended an 11-year sentence. The government also sought a $5 million fine, the forfeiture of the Utah ranch, and unspecified restitution to investors. The amount of restitution will be determined after Monday’s sentencing hearing.

Prosecutors in the case say Milton defrauded investors from 2019 onwards with improper statements, including claims that Nikola built trucks “from scratch” and had developed batteries that were actually purchased elsewhere. There is also the infamous Nikola marketing video in which a truck appears to be driving under its own power; in reality, it was rolling down a hill.

The video sparked an independent investigation, and Milton resigned in September 2020 after a Hindenburg Research report labeled the company a fraud. The company ultimately paid a $125 million penalty in a settlement with the U.S. Securities and Exchange Commission. Nikola’s stock price plummeted, causing significant losses not only to the company but also to investors.

In the end, Nikola sought reimbursement from Milton for the SEC settlement and fine, and in October a New York arbitration panel ordered him to pay the company $165 million.

Milton pleaded not guilty to the charges, and his lawyers argue there is no evidence the former CEO intended to defraud investors; any misstatements, they said, were the product of optimism and confidence in the company. Last month, Milton’s lawyers argued that he should be given probation, in part so that he could care for his sick wife.

Milton’s sentencing is one of several recent high-profile fraud cases involving technology founders. Theranos founder Elizabeth Holmes is serving an 11-year sentence after being convicted of defrauding investors in her blood-testing startup, and Sam Bankman-Fried, founder of cryptocurrency exchange FTX and trading firm Alameda Research, was found guilty in November on seven counts of fraud and money laundering.

The story is unfolding…

Source: techcrunch.com