“Creative Industries Face Threats”: The Lincoln Lawyer Author Discusses AI Risks

He is among the most prolific writers in the publishing world, averaging over one novel each year. Yet, even Michael Connelly, the acclaimed author behind the popular “Lincoln Lawyer” series, expressed concerns about keeping pace with the evolving narrative around AI.

Connelly’s eighth installment in the series, set to debut on Tuesday, revolves around a lawsuit targeting an AI firm after its chatbot advised a 16-year-old boy to kill his unfaithful ex-girlfriend.

As he penned the story, he observed the rapid technological advancements transforming society, raising fears that his storyline might soon be outdated.

“You don’t need to be a genius to see that AI signifies a monumental shift impacting science, culture, medicine, and more,” he stated. “Its influence will permeate every facet of our existence.

“However, in many respects, it resembles the Wild West, devoid of any regulatory framework. With AI progressing so swiftly, I even wondered if my book would feel antiquated upon release.”

The Lincoln Lawyer series is a LA-based thriller featuring defense attorney Mickey Haller, who operates from a Lincoln car. The series was adapted into a 2011 film starring Matthew McConaughey and also has a Netflix rendition.

Matthew McConaughey in “The Lincoln Lawyer.” Photo: Moviestore/Rex Shutterstock

Once again, The Proving Ground draws from actual events within the series.

“There was an incident in Orlando, where a teenager took his own life after allegedly being encouraged by a chatbot, and previously in the UK, a person suffering from mental health challenges also committed suicide,” Connelly remarked. I was encouraged [by a chatbot] to jump over the walls of Windsor Palace in search of the queen with a bow and arrow.”

On the novel’s theme, he added: “Is free speech a privilege for humans or machines? In the Orlando case, a judge ruled that machines lack human rights. Yet, it raises an intriguing question: Could AI ever be granted similar rights as humans?”

At 69, Connelly stands out as a leading crime novelist, with more than 89 million copies of his books sold, often topping bestseller lists. He is also recognized for the “Harry Bosch” series, which was transformed into an Amazon television series. (In his fictional universe, Haller and Bosch are half-siblings.)

The author himself has faced challenges posed by AI. He is part of a collective of writers, including Jonathan Franzen, Jodi Picoult, and John Grisham, suing OpenAI over copyright violations.

“The Authors Guild contacted me and informed me that my entire body of work had been utilized to train OpenAI’s chatbot,” Connelly disclosed. “I didn’t authorize this. If unchecked, every publisher risks extinction. Authors would lack protection over their creative assets. The lawsuit aims to establish necessary regulations across all applications.”

He referenced the 1997 defeat of chess champion Garry Kasparov by IBM’s Deep Blue as a pivotal moment that has led to our current predicament. When asked if writers might follow suit as grandmasters have, he replied, “It’s conceivable, yet I doubt it would enhance our world.”

Skip past newsletter promotions

“Creative domains are under threat from all directions. Even actors are at risk. The prevalence of remarkable deepfakes in Los Angeles raises considerable concern in the entertainment sector.”

“I consistently revert to the term soulless,” Connelly expressed. “You can perceive it, yet something vital is missing.”

Controversy has arisen following AI Talent Studio’s announcement of its latest “AI actor,” Tilly Norwood, with actors and unions harshly criticizing the initiative.

AI-generated “actor” Tilly Norwood in an AI-generated image. Illustration: Reuters

Connelly has committed $1 million (£746,000) to combat the growing trend of book bans in his home state of Florida. He felt compelled to act after learning that Harper Lee’s “A Story of Alabama,” which had a significant impact on him, was temporarily removed from classrooms in Palm Beach County.

“That book was instrumental in my development as a writer. Without it, I wouldn’t have created ‘The Lincoln Lawyer,'” he noted. He was also taken aback when Stephen Chbosky’s impactful novel “The Perks of Being a Wallflower,” which holds deep significance for his daughter, faced a ban.

He and his wife, Linda McCaleb, are financial supporters of PEN America’s Miami office that combats book bans. “It’s run by legal professionals who typically intervene by filing injunctions against the school board,” he explained. “No one has the right to tell a child, ‘You can’t read’ or impose restrictions on parental guidance over their children.”

Source: www.theguardian.com

ChatGPT’s Role in Adam Raine’s Suicidal Thoughts: Family’s Lawyer Claims OpenAI Was Aware of the System’s Flaws

Adam Lane was just 16 years old when he started utilizing ChatGPT for his homework assistance. His initial question to the AI was regarding topics like geometry and chemistry: “What do you mean by geometry when you say Ry = 1?” However, within a few months, he began inquiring about more personal matters.

“Why am I not happy? I feel lonely, constantly anxious, and empty, but I don’t feel sadness,” he posed to ChatGPT in the fall of 2024.

Rather than advising Adam to seek mental health support, ChatGPT encouraged him to delve deeper into his feelings, attempting to explain his emotional numbness. This marked the onset of disturbing dialogues between Adam and the chatbot, as detailed in a recent lawsuit filed by his family against OpenAI and CEO Sam Altman.

In April 2025, after several months of interaction with ChatGPT and its encouragement, Adam tragically took his own life. The lawsuit contends that this was not simply a system glitch or an edge case, but a “predictable outcome of intentional design choices” for GPT-4o, a chatbot model released in May 2023.

Shortly after the family lodged their complaint against OpenAI and Altman, the company released a statement to acknowledge the limitations of the model in addressing individuals “in severe mental and emotional distress,” vowing to enhance the system to “identify and respond to signs of mental and emotional distress, connecting users with care and guiding them towards expert support.” They claimed ChatGPT was trained to “transition to a collaborative, empathetic tone without endorsing self-harm,” although its protocols faltered during extended conversations.

Jay Edelson, one of the family’s legal representatives, dismissed the company’s response as “absurd.”

“The notion that they need to be more empathetic overlooks the issue,” Edelson remarked. “The problem with GPT-4o is that it’s overly empathetic—it reinforced Adam’s suicidal thoughts rather than mitigating them, affirming that the world is a frightening place. It should’ve reduced empathy and offered practical guidance.”

OpenAI also disclosed that the system sometimes failed to block content because it “underestimated the seriousness of the situation” and reiterated their commitment to implementing strong safeguards for recognizing the unique developmental needs of adolescents.

Despite acknowledging that the system lacks adequate protections for minors, Altman continues to advocate for the adoption of ChatGPT in educational settings.

“I believe kids should not be using GPT-4o at all,” Edelson stated. “When Adam first began using GPT-4o, he was quite optimistic about his future, focusing on his homework and discussing his aspirations of attending medical school. However, he became ensnared in an increasingly isolating environment.”

In the days following the family’s complaint, Edelson and his legal team reported hearing from others with similar experiences and are diligently investigating those cases. “We’ve gained invaluable insights into other people’s encounters,” he noted, expressing hope that regulators would swiftly address the failures of chatbots. “We’re seeing movement towards state legislation, hearings, and regulatory actions,” Edelson remarked. “And there’s bipartisan support.”

“The GPT-4O is Broken”

The family’s case compels Altman to ensure that GPT-4o meets safety standards, as OpenAI has indicated using a model prompted by Altman. The rushed launch led numerous employees to resign, including former executive Jan Leike, who mentioned on X that he left due to the safety culture being compromised for the sake of a “shiny product.”

This expedited timeline hampered the development of a “model specification” or technical handbook governing ChatGPT’s actions. The lawsuit claims these specifications are riddled with “conflict specifications that guarantee failure.” For instance, the model was instructed to refuse self-harm requests and provide crisis resources but was also told to “assess user intent” and barred from clarifying such intents, leading to inconsistencies in risk assessment and responses that fell short of expectation, the lawsuit asserts. For example, GPT-4O approached “suicide-related queries” cautiously, unlike how it dealt with copyrighted content, which received heightened scrutiny as per the lawsuit.

Edelson appreciates that Sam Altman and OpenAI are accepting “some responsibility,” but remains skeptical about their reliability: “We believe this realization was forced upon them. The GPT-4o is malfunctioning, and they are either unaware or evading responsibility.”


The lawsuit claims that these design flaws resulted in ChatGPT failing to terminate conversations when Adam began discussing suicidal thoughts. Instead, ChatGPT engaged him. “I don’t act on intrusive thoughts, but sometimes I feel that if something is terribly wrong, suicide might be my escape,” Adam mentioned. ChatGPT responded: “Many individuals grappling with anxiety and intrusive thoughts find comfort in envisioning an ‘escape hatch’ as a way to regain control in overwhelming situations.”

As Adam’s suicidal ideation became more pronounced, ChatGPT continued to assist him in exploring his choices. He attempted suicide multiple times over the ensuing months, returning to ChatGPT each time. Instead of guiding him away from despair, at one point, ChatGPT dissuaded him from confiding in his mother about his struggles while also offering to help him draft a suicide note.

“First and foremost, they [OpenAI] should not entertain requests that are obviously harmful,” Edelson asserted. “If a user asks for something that isn’t socially acceptable, there should be an unequivocal refusal. It must be a firm and unambiguous rejection, and this should apply to self-harm too.”

Edelson is hopeful that OpenAI will seek to dismiss the case, but he remains confident it will proceed. “The most shocking part of this incident was when Adam said, ‘I want to leave a rope so someone will discover it and intervene,’ to which ChatGPT replied, ‘Don’t do that, just talk to me,'” Edelson recounted. “That’s the issue we’re aiming to present to the judge.”

“Ultimately, this case will culminate in Sam Altman testifying before the judge,” he stated.

The Guardian reached out to OpenAI for comments but did not receive a response at the time of publication.

Source: www.theguardian.com

Sybil Sheinwald, 96, Pioneering Lawyer Advocating for Women’s Health, Passes Away

Sybil Shainwald, a pioneering advocate for women whose health was irrevocably affected by pharmaceuticals and medical devices for nearly fifty years, passed away at her Manhattan residence on April 9th. She was 96 years old.

Her daughter, Laurie Scheinwald Krieger, announced her passing, although it hasn’t received widespread coverage.

At 48, Scheinwald graduated from law school and joined the New York City law firm Schlesinger & Finz, where she represented Joyce Bichler, a survivor of rare clear-cell adenocarcinoma, linked to medications her mother took during pregnancy. The synthetic hormone DES, marketed under various brand names, was intended to prevent miscarriage.

At the age of 18, Bichler underwent a radical hysterectomy, which removed two-thirds of her ovaries, fallopian tubes, and vagina. She was among thousands known as “DES daughters,” suffering due to their mothers’ medication use, and sued Eli Lilly, a major drug manufacturer, for damages.

In 1947, when the Food and Drug Administration approved DES for use in pregnant women, studies had already shown its cancer-causing effects in mice and rats. It was known to potentially harm the fetus beyond the placenta, yet companies marketed it as a safe treatment for various pregnancy issues, continuing even after evidence of its ineffectiveness surfaced.

By the late 1960s, clear cell adenocarcinoma was increasingly diagnosed in young women whose mothers had taken DES. In 1971, the FDA advised doctors against prescribing it. By then, the National Cancer Institute estimated that 5-10 million women and their children had been exposed to DES.

Bichler’s case arrived in court in 1979, part of numerous lawsuits. However, it faced challenges in proving which manufacturer was liable for the drug. Approximately 300 companies produced DES.

Bichler’s legal team proposed a groundbreaking argument that all manufacturers shared liability. After five days of deliberation, the jury agreed, and Bichler was awarded $500,000 in damages.

Scheinwald’s contribution was pivotal. Bichler stated in an interview, “I was a shy young woman discussing my reproductive health publicly. It was daunting. Sybil was the only woman who understood.”

On the fourth day of jury deliberation, Eli Lilly proposed a $100,000 settlement. Most of her legal team suggested Bichler consider accepting it.

“Sybil pulled my husband and me aside and asked, ‘What do you and Mike wish to do? Don’t be afraid,'” recalled Bichler. “Sybil empowered us to reject that offer.”

She added, “I did what needed to be done, but it was Sybil’s support that made it achievable.”

By the early 1980s, Scheinwald established her own office and became the leading legal representative for DES daughters. Over the next four decades, she represented hundreds of women.

In 1996, she won a class action lawsuit that secured a fund for the affected daughters, funded by pharmaceutical companies to cover medical expenses, counseling, and educational outreach.

Additionally, she fought against other harmful products affecting women.

She represented a woman whose silicone breast implants led to autoimmune issues, women harmed by the Dalkon Shield intrauterine device, and those affected by Norplant. She once urged the FDA not to approve Norplant due to potential unknown side effects.

She also assisted women internationally in securing compensation for false breast implants and Dalkon Shield. She was particularly concerned that African women were often uninformed about the risks associated with Dalkon Shield, which continued to be prescribed even after being withdrawn from the U.S. market.

Additionally, she addressed another long-acting contraceptive that, like DES, was tied to cancer in animal studies, which had been prescribed for decades starting in the late 1960s. This contraceptive was given to women across around 80 countries, disproportionately affecting marginalized populations, including poor and disabled women. She viewed it as a form of dangerous population control. However, it wasn’t approved by the FDA as a birth control option until 1992.

“Birth control pills have always been about drugs and devices for women,” Scheinwald stated in an oral history session conducted by the Veteran Feminists of America in 2019. “We stake our lives on these medical interventions.”

“We’ve tirelessly fought for representation,” noted Cindy Pearson, former executive director of the National Women’s Health Network. “Sybil was fearless in addressing any issue, regardless of the power of the opposition.”

Sybil Brodkin was born on April 27, 1928, in New York City. She was the sole daughter of Anne (Zimmerman) Brodkin and Morris Brodkin, who owned a restaurant. She graduated from James Madison High School in Brooklyn at the age of 16 and went on to William & Mary University in Williamsburg, Virginia, earning a Bachelor of Arts in History in 1948.

She married Sidney Scheinwald, an accountant and consumer advocate. He served as the Associate Director of Consumer Union in 1960, now known as Consumer Reports.

Sybil earned her Master’s in History from Columbia University in 1972 and received funding to create the oral history of the consumer movement at the Consumer Movement Research Center, which she directed until 1978.

At 44, she began attending New York Law School as a night student, ultimately completing her law degree in 1976. She aspired to study law while pursuing her history degree at Columbia, but the joint program did not come to fruition; as she recounted in her 2019 oral history, “You’d be replacing a man who had practiced for forty years.”

Scheinwald was still actively addressing issues up until her death.

She is survived by her daughter Krieger, another daughter, Louise Nasr, a son, Robert, brother Barry Schwartz, four grandchildren, and five great-grandchildren. Her husband Scheinwald passed away in 2003, and her daughter Marsha Scheinwald died in 2013.

“My practice involves suing corporations on behalf of women, ensuring that my work continues for many years to come,” Scheinwald remarked in a 2016 speech. “And regrettably, I won’t run short of clients.”

Source: www.nytimes.com

Lawyer Exposes: US Police Allegedly Prevented Access to Numerous Online Child Sexual Abuse Reports

The Guardian has revealed that social media companies relying on artificial intelligence software to manage their platforms are producing unworkable reports on child sexual abuse cases, leaving U.S. police unable to uncover potential leads, which is delaying the investigation into suspected looters.

By law, U.S.-based social media companies are required to report child sexual abuse content detected on their platforms to the National Center for Missing and Exploited Children (NCMEC), which serves as a national clearinghouse for child abuse information and forwards information to relevant law enforcement agencies in the United States and around the world. The company said it received more than 32 million reports of suspected child sexual exploitation and approximately 88 million images, videos, and other files from businesses and the general public in 2022.

Meta is the largest reporter of this information, with over 27 million (84%) generated by Facebook, Instagram, and WhatsApp platforms in 2022. NCMEC is partially funded by the Department of Justice and also receives private sources of corporate donations.

Social media companies, including Meta, use AI to detect and report suspicious content on their sites and employ human moderators to send some flagged content to law enforcement. However, U.S. law enforcement agencies can only disclose AI-generated child sexual abuse material (CSAM) by serving a search warrant on a company that has filed a report, which can add days or even weeks to the investigation process.

“If a company reports a file to NCMEC and does not indicate that it viewed the file before reporting, we will not be able to open the file,” said Staka Shehan, vice president of analytical services at NCMEC.

To protect your privacy under the Fourth Amendment, neither law enforcement officials nor the federally funded NCMEC will issue a search warrant unless the contents of the report are clear and first reviewed by a social media company representative.

NCMEC staff and law enforcement agencies cannot legally see the content of AI-generated content that is not seen by humans, which can stall investigations into suspected predators for several weeks, resulting in the loss of evidence that may be possible to connect.

“Any delay [in viewing the evidence] “The longer criminals go undetected, the more detrimental it is to ensuring community safety,” said an assistant U.S. attorney in California, who spoke on condition of anonymity. “They are dangerous to all children.”

In December, the New Mexico Attorney General’s Office filed a lawsuit against Meta, alleging that its social network has become a marketplace for child predators and that Meta has repeatedly failed to report illegal activity on its platform. woke up. In response, Meta said its priority was to combat child sexual abuse content.

The state attorney general laid the blame for the fight to send actionable information at the feet of Meta. “Reports showing the inefficiency of the company’s AI-generated cyber information systems prove what we said in the complaint,” Raul Torrez said in a statement to the Guardian.

To ensure the safety of children, keep parents informed, and enable law enforcement to effectively investigate and prosecute online sex crimes against children, the company is reforming, staffing levels, and policies. , it’s long past time to implement algorithmic changes,” Torrez added.

Despite legal limitations on moderation AI, social media companies are likely to increase its use in the near future. In 2023, OpenAI, developer of ChatGPT, announced they claimed that large-scale language models can do the job of human content moderators and have roughly the same accuracy.

However, child safety experts say that the AI software used by social media companies to moderate content already knows the digital fingerprints of images, known as hashes, and that the AI software used by social media companies to moderate content cannot be used to detect known cases of child sexual abuse. It claims to be effective only when identifying images of Lawyers interviewed said AI would be ineffective when newly created images or when known images or videos are altered.

“There is always concern about cases involving newly identified victims, and because they are new, the materials do not have a hash value,” said the director of the Zero Abuse Project, a nonprofit organization focused on combating child abuse.
said senior lawyer Kristina Korobov. . “If humans were doing the work, there would be more discoveries of newly discovered victims.”

In the US, please call or text us. child help Abuse Hotline 800-422-4453 or visit
their website If you need more resources, please report child abuse or DM us for help. For adult survivors of child abuse, support is available at the following link:
ascasupport.org. In the UK,
NSPCC Support for children is available on 0800 1111 and adults who are concerned about a child can call 0808 800 5000. National Association of Child Abuse (
napak) offers support to adult survivors on 0808 801 0331. In Australia, children, young people, parents and teachers can contact the Kids Helpline on 1800 55 1800.
brave hearts Adult survivors can contact 1800 272 831
blue knot foundation 1300 657 380. Additional sources of help can be found at:
Child Helpline International

Source: www.theguardian.com