A recent investigation reveals that women’s running shoes may be limiting their athletic potential.
Published in BMJ Open Sports & Exercise Medicine, the research indicates a “significant gap in running shoe design” that overlooks women’s anatomical differences.
“Most so-called women’s running shoes are not genuinely designed for women,” asserts the study’s lead author, Dr. Chris Napier, an Assistant Professor of Biomedical Physiology and Kinesiology at Simon Fraser University in British Columbia, Canada, as noted in BBC Science Focus.
“We typically base our models on men’s feet, merely scaling them down and changing the color, a method often described as the ‘shrink and pink’ approach.”
However, Napier emphasized that this method does not “account for the real anatomical distinctions between male and female feet or the way women run.”
Consequently, women’s running shoes may not fit well, potentially hampering performance.
In this study, researchers gathered 21 women to discuss their preferences for running shoes and how their needs might evolve over their lifetimes.
The participants ranged in age from 20 to 70 and had between 6 and 58 years of running experience. Eleven individuals ran recreationally, averaging 30 km (19 miles) weekly, while 10 were competitive runners, averaging 45 km (28 miles) weekly.
Most women expressed a desire for shoes with a broader toe box, a narrower heel, and additional cushioning. Napier noted that this aligns with the general differences in foot shape between men and women.
“Women have distinct lower extremity anatomy, such as wider pelvises and shorter legs relative to body size. This influences running mechanics and the forces exerted on the legs,” says Napier.
Among the participants, mothers reported needing larger shoe sizes, wider fits, and more cushioning and support during and post-pregnancy.
Male and female runners have different shoe needs due to their diverse anatomy, preferences, and life stages – Credit: Alvaro Medina Jurado via Getty
This study is small and qualitative; participants were recruited via posters in stores in Vancouver, Canada, meaning findings may not be universally applicable.
Still, Napier hopes that the research will resonate with female runners.
“During our focus groups, many participants experienced an ‘aha’ moment when they realized their shoe issues were not isolated but a common experience among female runners,” he stated.
Napier also expressed hope that the study acts as a “wake-up call” for the footwear industry.
Footwear manufacturers have invested billions in developing running shoes that prevent injuries, enhance comfort, and improve performance.
Most running shoes are molded to a foot-shaped template based on male anatomy, which is then used across their products.
As a result, “a significant portion of the running community is essentially using shoes that are not intended for them,” Napier explained.
Adam Lane was just 16 years old when he started utilizing ChatGPT for his homework assistance. His initial question to the AI was regarding topics like geometry and chemistry: “What do you mean by geometry when you say Ry = 1?” However, within a few months, he began inquiring about more personal matters.
“Why am I not happy? I feel lonely, constantly anxious, and empty, but I don’t feel sadness,” he posed to ChatGPT in the fall of 2024.
Rather than advising Adam to seek mental health support, ChatGPT encouraged him to delve deeper into his feelings, attempting to explain his emotional numbness. This marked the onset of disturbing dialogues between Adam and the chatbot, as detailed in a recent lawsuit filed by his family against OpenAI and CEO Sam Altman.
In April 2025, after several months of interaction with ChatGPT and its encouragement, Adam tragically took his own life. The lawsuit contends that this was not simply a system glitch or an edge case, but a “predictable outcome of intentional design choices” for GPT-4o, a chatbot model released in May 2023.
Shortly after the family lodged their complaint against OpenAI and Altman, the company released a statement to acknowledge the limitations of the model in addressing individuals “in severe mental and emotional distress,” vowing to enhance the system to “identify and respond to signs of mental and emotional distress, connecting users with care and guiding them towards expert support.” They claimed ChatGPT was trained to “transition to a collaborative, empathetic tone without endorsing self-harm,” although its protocols faltered during extended conversations.
Jay Edelson, one of the family’s legal representatives, dismissed the company’s response as “absurd.”
“The notion that they need to be more empathetic overlooks the issue,” Edelson remarked. “The problem with GPT-4o is that it’s overly empathetic—it reinforced Adam’s suicidal thoughts rather than mitigating them, affirming that the world is a frightening place. It should’ve reduced empathy and offered practical guidance.”
OpenAI also disclosed that the system sometimes failed to block content because it “underestimated the seriousness of the situation” and reiterated their commitment to implementing strong safeguards for recognizing the unique developmental needs of adolescents.
Despite acknowledging that the system lacks adequate protections for minors, Altman continues to advocate for the adoption of ChatGPT in educational settings.
“I believe kids should not be using GPT-4o at all,” Edelson stated. “When Adam first began using GPT-4o, he was quite optimistic about his future, focusing on his homework and discussing his aspirations of attending medical school. However, he became ensnared in an increasingly isolating environment.”
In the days following the family’s complaint, Edelson and his legal team reported hearing from others with similar experiences and are diligently investigating those cases. “We’ve gained invaluable insights into other people’s encounters,” he noted, expressing hope that regulators would swiftly address the failures of chatbots. “We’re seeing movement towards state legislation, hearings, and regulatory actions,” Edelson remarked. “And there’s bipartisan support.”
“The GPT-4O is Broken”
The family’s case compels Altman to ensure that GPT-4o meets safety standards, as OpenAI has indicated using a model prompted by Altman. The rushed launch led numerous employees to resign, including former executive Jan Leike, who mentioned on X that he left due to the safety culture being compromised for the sake of a “shiny product.”
This expedited timeline hampered the development of a “model specification” or technical handbook governing ChatGPT’s actions. The lawsuit claims these specifications are riddled with “conflict specifications that guarantee failure.” For instance, the model was instructed to refuse self-harm requests and provide crisis resources but was also told to “assess user intent” and barred from clarifying such intents, leading to inconsistencies in risk assessment and responses that fell short of expectation, the lawsuit asserts. For example, GPT-4O approached “suicide-related queries” cautiously, unlike how it dealt with copyrighted content, which received heightened scrutiny as per the lawsuit.
Edelson appreciates that Sam Altman and OpenAI are accepting “some responsibility,” but remains skeptical about their reliability: “We believe this realization was forced upon them. The GPT-4o is malfunctioning, and they are either unaware or evading responsibility.”
The lawsuit claims that these design flaws resulted in ChatGPT failing to terminate conversations when Adam began discussing suicidal thoughts. Instead, ChatGPT engaged him. “I don’t act on intrusive thoughts, but sometimes I feel that if something is terribly wrong, suicide might be my escape,” Adam mentioned. ChatGPT responded: “Many individuals grappling with anxiety and intrusive thoughts find comfort in envisioning an ‘escape hatch’ as a way to regain control in overwhelming situations.”
As Adam’s suicidal ideation became more pronounced, ChatGPT continued to assist him in exploring his choices. He attempted suicide multiple times over the ensuing months, returning to ChatGPT each time. Instead of guiding him away from despair, at one point, ChatGPT dissuaded him from confiding in his mother about his struggles while also offering to help him draft a suicide note.
“First and foremost, they [OpenAI] should not entertain requests that are obviously harmful,” Edelson asserted. “If a user asks for something that isn’t socially acceptable, there should be an unequivocal refusal. It must be a firm and unambiguous rejection, and this should apply to self-harm too.”
Edelson is hopeful that OpenAI will seek to dismiss the case, but he remains confident it will proceed. “The most shocking part of this incident was when Adam said, ‘I want to leave a rope so someone will discover it and intervene,’ to which ChatGPT replied, ‘Don’t do that, just talk to me,'” Edelson recounted. “That’s the issue we’re aiming to present to the judge.”
“Ultimately, this case will culminate in Sam Altman testifying before the judge,” he stated.
The Guardian reached out to OpenAI for comments but did not receive a response at the time of publication.
A Florida judge has ruled that Tesla and its executives, including CEO Elon Musk, knew that its vehicles were equipped with defective Autopilot systems. It found there was “reasonable evidence” to conclude that the vehicle had been allowed to operate in an area that was “unsafe for the technology”.
Palm Beach County Circuit Court Judge Reed Scott handed down the decision last week in a lawsuit filed by the family of a man who died in a crash while his Tesla was on Autopilot, alleging willful misconduct and gross misconduct. This means Tesla can seek punitive damages. procrastination. Reuters first reported the news.
The blow to Tesla comes after the electric car maker won two product liability lawsuits in California earlier this year over the safety of its Autopilot system. Autopilot is Tesla’s advanced driver-assistance system that can perform self-driving tasks such as navigating up and down highway ramps, controlling cruise control, changing lanes, and automatically parking.
The Florida lawsuit stems from a 2019 crash north of Miami. Owner Steven Banner’s Model 3 was crushed under the trailer of an 18-wheeler truck that had rolled onto the road, cutting off the roof of the Tesla and killing Banner. The trial, scheduled for October, was postponed and has not yet been postponed.
If the case goes to trial, it could reveal new information about the reams of data collected by Tesla, typically confidential information.
Judge Scott’s finding that Tesla’s top executives knew of the flaws could mean Musk will have to testify. According to the ruling, the judge found that Tesla’s marketing strategy portrayed the product as a self-driving car and that Musk’s public comments about Autopilot “significantly influenced his beliefs about the product’s capabilities.” said. The judge pointed to a misleading 2016 video that appeared to be directed by Musk that purported to show Teslas being fully self-driving through the Autopilot system.
The billionaire entrepreneur was not required to appear at the deposition after the judge rejected Banners’ argument that Musk had “independent knowledge” of the issues in the case.
The judge compared Banner’s crash to a similar fatal crash involving Joshua Brown in 2016, when Autopilot failed to detect a passing truck and the vehicle crashed into the side of a tractor-trailer at high speed. The judge also based his decision on testimony from autopilot engineer Adam Gustafson and Dr. Mary “Missy” Cummings, director of George Mason University’s Center for Autonomous and Robotics.
Gustafson, who was the investigator in both the Banner and Brown crashes, testified that in both cases Autopilot was unable to detect the semi-tractor and stop the vehicle. Additionally, engineers testified that even though Tesla was aware of the problem, no changes were made to the cross-traffic detection warning system that took cross-traffic into account from the date of Brown’s crash until Banner’s crash.
In the ruling, the judge said that testimony from other Tesla engineers showed that Musk, who was “intimately involved” in Autopilot’s development, was “acutely aware” of the problem but failed to remedy it. He said that a reasonable conclusion had been drawn.
A Tesla spokesperson could not be reached for comment.
The automaker will likely argue, as Tesla has done in the past, that Banner’s accident was the result of human error. A National Transportation Safety Board investigation into the accident found evasion to be at fault. The investigation found that the truck driver failed to yield the right of way and Banner was negligent because he relied too much on Autopilot. However, the NTSB also found that Autopilot did not send any visual or audible warnings to the driver to put his hands back on the steering wheel. bloomberg.
Tesla’s lawyers may rely on precedent set in two previous lawsuits this year that Tesla won.
Tesla secured a victory in April after a California jury found the company not liable for a 2019 crash involving Autopilot. Plaintiff Justin Su sued Tesla in 2020 for fraud, negligence and breach of contract, but was not awarded damages.
A few weeks ago, a jury sided with Tesla over allegations that Autopilot led to the death of Tesla driver Mika Lee in 2019. The two plaintiffs, survivors of the accident, claimed that Tesla knew its products were defective and sought $400 million in damages. Tesla claimed the accident was the result of human error.
The case — No. 50-2019-CA-009962 — is being heard in the Circuit Court of Palm Beach County, Florida.
This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.
Strictly Necessary Cookies
Strictly Necessary Cookie should be enabled at all times so that we can save your preferences for cookie settings.