Students Push Back Against AI-Taught Course: ‘I Could Have Just Asked ChatGPT’

Students at Staffordshire University expressed feeling “deprived of knowledge and enjoyment” upon realizing that the course they intended to pursue for their digital careers was primarily delivered through AI.

James and Owen were among 41 students who enrolled in a coding module at Staffordshire last year, seeking a government-backed career change through an apprenticeship program aimed at preparing people to become cybersecurity experts or software engineers.

However, as AI-generated slides were intermittently narrated by an AI voiceover, James began to lose confidence in the program and its administrators, fearing he had “wasted two years” of his life on a course designed “in the most cost-effective manner.”

“If I were to submit something created by an AI, I’d be expelled from the university, yet we are being instructed by an AI,” James remarked in a confrontation with an instructor, captured in a class recording from October 2024.

James and his peers have engaged in several discussions with university officials regarding the use of AI in their coursework. Nonetheless, the university seems to persist in utilizing AI-generated materials for instruction. This year, it posted a policy statement on its course website rationalizing the use of AI, detailing a “Framework for Academic Professionals to Leverage AI Automation” in their academic activities and teaching.

The university’s own policy states that students who outsource assignments to AI or present AI-generated work as their own are breaching the integrity policy and could face academic misconduct charges.

“I’m in the midst of my life and career,” James lamented. “I don’t feel I can just leave and start over now. I feel trapped on this path.”

The situation at Staffordshire is becoming increasingly common. Universities are integrating AI tools to assist students, develop course materials, and provide tailored feedback. A Department for Education policy document released in August welcomed this trend, asserting that generative AI “has the potential to revolutionize education.” A survey of 3,287 higher education faculty, conducted last year by the education technology body Jisc, revealed that almost a quarter use AI tools in their teaching.

For students, AI education seems to be more demoralizing than transformative. In the US, students have voiced their discontent in online reviews of professors who use AI. In the UK, undergraduates have turned to Reddit to express frustration over instructors copying and pasting feedback generated by ChatGPT or using AI-generated images in coursework.

“I recognize there’s pressure compelling instructors to use AI, but I’m just disappointed,” one student wrote.

James and Owen realized “almost immediately” that AI was being utilized in their Staffordshire course last year, notably during their first class when the instructor presented a PowerPoint with an AI audio reading the slides.

Shortly thereafter, they began to notice indications that some course materials were AI-generated, including inconsistent switching between American and British English, suspicious file names, and “general, surface-level information” that sometimes cryptically referenced US law.

Signs of AI-generated content persisted this year. In one course video uploaded online, the narration introducing the material shifted to a Spanish accent for approximately 30 seconds before reverting to a British accent.

Narration accent changes during lesson in allegedly AI-generated course – video

The Guardian examined the course materials at Staffordshire and utilized two distinct AI detectors (Winston AI and Originality AI) to assess this year’s content. Both indicated that numerous assignments and presentations were “highly likely to have been generated by AI.”

James raised his concerns at a monthly meeting with student representatives early in the course, and again in a lecture in late November that was recorded as part of the instructional materials. In the recording, he asks the instructor not to bother with the slides.

“Everyone knows these slides were generated by AI. We would prefer if they were discarded,” he stated. “I don’t want guidance from GPT.”

Shortly after, the student representative for the course responded, “We conveyed this feedback, James, and the reply was that instructors can use diverse tools. This answer was quite frustrating.”

Another student commented: “While there are some helpful points in the presentation, only 5% of it is useful. There’s valuable content buried here, but perhaps we can extract that value ourselves by consulting ChatGPT.”

The lecturer laughed awkwardly, saying, “I appreciate the honesty…” before shifting to discuss another tutorial he had created using ChatGPT. “Honestly, I did this on very short notice,” he added.

Ultimately, the course director assured James that the final session would involve no AI, with the material delivered by two human instructors.

In response to inquiries from the Guardian, Staffordshire University asserted that “academic standards and learning objectives were upheld” for the course.

“Staffordshire University endorses the responsible and ethical application of digital technologies in accordance with our guidelines. While AI tools may aid certain aspects of preparation, they cannot replace academic expertise and must always be utilized in a manner that preserves academic integrity and discipline standards.”

Although the university did bring in a human lecturer for the final session, James and Owen said it felt like too little, too late, especially since the university apparently continued to use AI in this year’s instructional materials.

“I feel as if a part of my life has been taken from me,” James stated.

Owen, who is in the midst of a career transition, explained that he opted for the course to gain foundational knowledge rather than merely a qualification, but he now believes it was a waste of time.

“It’s exceedingly frustrating to sit through material that lacks value when I could be dedicating my time to something genuinely worthwhile,” he remarked.

Source: www.theguardian.com

Maximizing ChatGPT as a Study Ally in University: A Guide to Ethical Use

For numerous students, ChatGPT has become an essential tool, akin to a notebook or calculator.

With its ability to refine grammar, organize revision, and create flashcards, AI is swiftly establishing itself as a dependable ally in higher education. Educational institutions, however, are struggling to adapt to the shift. Using it to understand the material? That’s fine. Using it to write your assignments? Not permitted.

According to a recent report from the Higher Education Policy Institute, nearly 92% of students now use generative AI in some capacity, a notable rise from 66% the preceding year.

“To be honest, everyone is using it,” states Magan Chin, a master’s student in technology policy at Cambridge. She shares her preferred AI research techniques on TikTok, ranging from chat-based learning sessions to prompts with insightful notes.

“It has progressed. Initially, many viewed ChatGPT as a form of cheating, believing it undermined our critical thinking abilities. But it has now transitioned into a research partner and conversational tool that enhances our skills.”

“People just refer to it as ‘chat,’” she noted about its popular nickname.

When used judiciously, it can transform into a potent self-study resource. Chin suggests feeding class notes into the system and asking it to generate practice exam questions.

“You can engage in verbal dialogues as if with a professor and interact with it,” she remarked, adding that it can also produce diagrams and summarize challenging topics.

Jayna Devani, international education lead at OpenAI, ChatGPT’s US-based developer, endorses this interactive method. “You can upload course materials and request multiple-choice questions,” she explains. “It aids in breaking down complicated tasks into essential steps and clarifying concepts.”
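For readers curious what that workflow looks like outside the chat window, here is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and notes file are illustrative assumptions, not details from either account above.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def practice_questions(notes: str, n: int = 5) -> str:
    """Ask a chat model to turn course notes into practice exam questions."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[
            {"role": "system",
             "content": "You write exam-style practice questions for students."},
            {"role": "user",
             "content": (f"From the notes below, write {n} multiple-choice "
                         f"questions, each with its correct answer explained.\n\n"
                         f"{notes}")},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("lecture_notes.txt") as f:  # hypothetical notes file
        print(practice_questions(f.read()))

The same pattern covers the flashcard and practice-exam uses described above: paste the notes in, then ask for a specific, checkable output format.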

However, there exists the potential for overreliance. Chin and her peers employ what they call “push-back techniques.”

“When ChatGPT provides an answer, consider what alternative perspectives others might offer,” she advises. “We utilize it as a contrasting view, but we acknowledge that it is just one voice among many.” She encourages exploring how others might approach the topic differently.

Such positive applications are generally welcomed by universities. Nevertheless, the academic community is addressing concerns regarding AI misuse, with many educators expressing significant apprehensions about its effect on the university experience.

Graham Wynn, pro vice-chancellor for education at Northumbria University, asserts that while AI can be used for assistance and for structuring assessments, students should not depend on it for knowledge and content. “Students can easily find themselves in trouble with hallucinations, fabricated references, and misleading content.”

Northumbria, similar to numerous universities, employs AI detectors that can flag submissions indicative of potential overdependence. Students at the University of the Arts London (UAL) are required to keep a log of their AI usage and integrate it into their individual creative processes.

As with most emerging technologies, developments are rapid. The AI tools students use today are already prevalent in the workplaces they will soon enter. Universities, however, focus on process as well as outcomes, hence the consistent message from educators: let AI support learning, but do not let it substitute for it.

“AI literacy is an essential skill for students,” states a UAL spokesperson.

Source: www.theguardian.com

Transforming Education: Educators Explore AI’s Role in University Skills Development

OpenAI CEO Sam Altman recently shared on a US podcast that if he were graduating today, “I would feel like the luckiest child in history.”

Altman, whose company OpenAI launched ChatGPT in November 2022, is convinced that the transformative power of AI will create unparalleled opportunities for the younger generation.

While there are shifts in the job market, Altman notes, “this is a common occurrence.” He adds that “young people are great at adapting,” and that exciting new jobs offering greater possibilities keep emerging.

For sixth-form students in the UK and their families contemplating university decisions—what to study and where—Altman’s insights may provide reassurance amidst the choices they face in the age of generative AI. However, in this rapidly evolving landscape, experts emphasize the importance of equipping students to maximize their university experiences and be well-prepared for future employment.

Dr. Andrew Rogoiski of the Institute for People-Centred AI at the University of Surrey points out that many students are already navigating the AI landscape. “The pace of change is significant, often outpacing academic institutions, which typically move slowly and cautiously to ensure fair access.”

“In a very short time, we’ve accelerated from zero to 100. Naturally, the workforce is adapting as well.”

What advice does he have for future students? “Inquire. Ask questions. There are diverse career paths available. Make sure your university is keeping up with these changes.”

Students not yet familiar with AI should invest time in learning about it and integrating it into their studies, regardless of their chosen field. Rogoiski asserts that proficiency with AI tools has become as essential as literacy: “It’s critical to understand what AI can and can’t do,” and “being resourceful and adaptable is key.”

He continues:

“Then, I begin to assess how the university is addressing AI integration. Are my course and the university as a whole effectively utilizing AI?”

While there’s a wealth of information available online, Rogoiski advises students to engage with universities directly, asking academics, “What is your strategy? What is your stance? Are you preparing graduates for a sustainable future?”

Dan Hawes, co-founder of a specialist graduate recruitment consultancy, expresses optimism for the future of UK graduates, asserting that the current job market slowdown is driven more by economic factors than by AI. “Predicting available jobs three or four years from now is challenging, but I believe graduates will be highly sought after,” he states. “This is a generation that has grown up with AI, meaning employers will likely be excited to bring this new talent into their organizations.”

“Thus, when determining study options for sixth-form students, parents should consider the employment prospects connected to specific universities.”

For instance, degrees in mathematics are consistently in high demand among his clients, a trend unlikely to shift soon. “AI will not diminish the skills and knowledge gained from a mathematics degree,” he asserts.

He acknowledges that AI poses challenges for students considering higher education alongside their parents. “Yet I believe it will ultimately be beneficial, making jobs more interesting, reshaping roles, and creating new ones.”

Elena Simperl, a computer science professor at King’s College London, co-directs the King’s Institute of Artificial Intelligence and advises students to explore AI offerings across all university departments. “AI is transforming our processes. It’s not just about how we write emails, read documents, or find information,” she notes.

Students should contemplate how to shape their careers in AI. “DeepMind suggests AI could serve as co-scientists, meaning fully automated AI labs will conduct research. Therefore, universities must train students to maximize these technologies,” she remarks. “It doesn’t matter what they wish to study; they should choose universities that offer extensive AI expertise, extending beyond just computer science.”

Professor Simperl observes that evidence suggests no jobs will vanish completely. “We need to stop focusing on which roles AI may eliminate and consider how it can enhance various tasks. Those skilled in using AI will possess a significant advantage.”

In this new AI-driven landscape, is a degree in English literature or history still valuable? “Absolutely, provided it is taught well,” asserts Rogoiski. “Such studies should impart skills that endure throughout one’s lifetime—appreciation of literature, effective writing, critical thinking, and communication are invaluable abilities.”

“The application of that degree will undoubtedly evolve, but if taught effectively, the lessons learned will resonate throughout one’s life. If nothing else, our AI overlords may take over most work, allowing us more leisure time to read, while relying on universal basic income.”

Source: www.theguardian.com

Thousands of UK University Students Caught Cheating With AI

In recent years, thousands of university students in the UK have been caught misusing ChatGPT and similar AI tools, a Guardian investigation reveals, even as traditional forms of plagiarism decline significantly.

The investigation found nearly 7,000 confirmed cases of cheating involving AI tools in the 2023-24 academic year, equivalent to 5.1 cases per 1,000 students, up from just 1.6 cases per 1,000 students in the previous academic year, 2022-23.

Experts anticipate these figures will increase further this year, estimating potential cases could reach around 7.5 per 1,000 students, although reported cases likely reflect only a fraction of the actual instances.

This data underscores the rapidly changing landscape for universities as they strive to update evaluation methods in response to emerging technologies like ChatGPT and other AI-driven writing tools.

In 2019-20, before the advent of generative AI, plagiarism accounted for nearly two-thirds of all academic misconduct, and plagiarism rates surged during the pandemic as many assessments moved online. With the advance of AI tools, however, the character of academic cheating has changed.

Predictions suggest that in the current academic year, confirmed instances of traditional plagiarism, which had already fallen from a peak of 19 per 1,000 students to 15.2, will drop to approximately 8.5 per 1,000 students.

A set of charts displays confirmed cheating cases per 1,000 students: plagiarism rises from 2019-20 to 2022-23 and then falls back, while AI-related cheating rises from 2022-23 to a level comparable to plagiarism. Other misconduct remains stable.

The Guardian contacted 155 universities under the Freedom of Information Act, requesting figures on confirmed cases of academic misconduct, including plagiarism and AI-related cheating, over the past five years. Of these, 131 responded, though not all held comprehensive records for every year or category of misconduct.

More than 27% of responding institutions did not record AI misuse as a distinct category of misconduct in 2023-24, suggesting the issue is not yet fully acknowledged across the sector.

Numerous instances of AI-related cheating likely go undetected. A survey by the Higher Education Policy Institute found that 88% of students admitted to using AI for assessments. And last year, researchers at the University of Reading tested their own assessment system and found that AI-generated submissions went undetected 94% of the time.

Dr. Peter Scarfe, an associate professor of psychology at the University of Reading and co-author of that research, noted that while cheating in some form has always existed, AI poses a fundamentally different challenge that the education sector must adapt to.

He remarked, “I believe the cases we see are merely the tip of the iceberg. AI detection operates very differently from traditional plagiarism checks, making it almost impossible to prove misuse. If an AI detector indicates AI usage, it’s challenging to counter that claim.”

“We cannot merely transition all student assessments to in-person formats. Simultaneously, the sector must recognize that students are employing AI even if it goes unreported or unnoticed.”

Students keen to avoid AI detection have numerous online resources at their disposal. The Guardian found various TikTok videos that promote AI paraphrasing and essay writing tools tailored for students, which can circumvent typical university AI detection systems by effectively “humanizing” text produced by ChatGPT.

Dr. Thomas Lancaster, a researcher of academic integrity at Imperial College London, stated, “It’s exceedingly challenging to substantiate claims of AI misuse among students who are adept at manipulating the generated content.”

Harvey*, who has just completed a business management degree at a university in the north of England, told the Guardian that he used AI to brainstorm ideas, structure assignments, and suggest references, noting that many of his peers have engaged with these technologies in the same way.

“When I started university, ChatGPT was already available, making its presence constant in my experience,” he explained. “I don’t believe many students use AI simply to replicate text. Most see it as a tool for generating ideas and inspiration. Any content I derive from it, I thoroughly rework in my style.”

“I know people who, after using AI, enhance and adapt the output through various methods to make it sound human-authored.”

Amelia*, who has just completed her first year in a music business program at a university in the southwest, also acknowledged using AI for summarization and brainstorming, highlighting the tool’s significant benefits for students with learning difficulties. “A friend of mine uses AI for structuring essays rather than relying solely on it to write or study, integrating her own viewpoints and conducting some research. She has dyslexia.”

Science and Technology Secretary Peter Kyle recently emphasized to the Guardian the importance of leveraging AI to “level the playing field” for children with dyslexia.

Technology companies clearly see students as a key demographic for their AI products. Google is now offering university students in the US and Canada a free 15-month upgrade to its Gemini tools.

Lancaster stated, “Assessment methods at the university level may feel meaningless to students, even if educators have valid reasons for their structure. Understanding the reasons behind specific tasks and engaging students in the assessment design process is crucial.”

“There are frequent discussions about the merits of increasing the number of examinations instead of written assessments, yet the value of retaining knowledge through memorization diminishes yearly. Emphasis should be on fostering communication skills and interpersonal abilities—elements that are not easily replicable by AI and crucial for success in the workplace.”

A government spokesperson stated that over £187 million has been invested in the national skills program, with guidelines issued on AI utilization within schools.

They affirmed: “Generative AI has immense potential to revolutionize education, presenting exciting prospects for growth during transitional periods. However, integrating AI into education, learning, and assessment necessitates careful consideration, and universities must determine how to harness its advantages while mitigating risks to prepare for future employment.”

*Name has been changed.

Source: www.theguardian.com

University Professors Utilize ChatGPT, Sparking Student Discontent

In February, Ella Stapleton, a senior at Northeastern University, was going over her notes from an organizational behavior class when she stumbled upon something unusual. Was that a ChatGPT question from her professor?

Within a document her business professor had created for a lesson on leadership models, she noticed an instruction to ChatGPT: “Expand all areas. More in depth and concrete.” Below it was a list of positive and negative leadership traits, complete with definitions and bullet points.

Stapleton texted a classmate.

“Did you see the notes he uploaded to Canvas?” she asked, referring to the university’s software for course materials. “He created it using ChatGPT.”

“OMG STOP,” her classmate responded. “What’s going on?”

Curious, Stapleton began to investigate. She went through the professor’s slides and discovered more signs of AI involvement: inconsistencies in the text, skewed images, and glaring mistakes.

She was frustrated. Given the school’s tuition and reputation, she expected a high-quality education, and this course was crucial for her business major. Its syllabus explicitly prohibited “academically dishonest activities,” including the unauthorized use of AI and chatbots.

“He tells us not to use it, yet he uses it himself,” she remarked.

Stapleton lodged a formal complaint with Northeastern’s business school, citing the inappropriate use of AI and other concerns about teaching methods, demanding a refund of the tuition for that class, which was over $8,000—about a quarter of her semester’s total.

When ChatGPT launched in late 2022, it created a whirlwind of concern across educational institutions: cheating had suddenly become incredibly easy. Students tasked with writing essays could let the tool handle them in mere seconds. Some institutions banned it, while others introduced AI detection services, despite concerns about their accuracy.

However, the tide has turned. Nowadays, it is students scrutinizing professors for heavy reliance on AI, voicing complaints on course review platforms and flagging wording that reads as machine-generated. They call out the hypocrisy and make financial arguments, insisting they deserve instruction from humans, not algorithms they could consult for free themselves.

On the other side, professors have claimed they use AI chatbots as a means to enhance education. An instructor interviewed by The New York Times stated that the chatbot streamlined their workload and acted as an automated teaching assistant.

The number of educators using these tools is on the rise. In a national survey conducted last year, 18% of more than 1,800 higher education instructors identified as frequent users of generative AI tools; in this year’s follow-up survey, that figure had nearly doubled, according to Tyton Partners, the consultancy behind the study. AI companies are eager to facilitate the shift, with OpenAI and Anthropic recently releasing enterprise versions of their chatbots designed specifically for educational institutions.

(The Times is suing OpenAI for copyright infringement, as the company allegedly used news content without permission.)

Generative AI is clearly here to stay, yet universities are still adapting their standards to it. Professors are navigating their own learning curve and, like Stapleton’s instructor, sometimes misjudge the technology’s risks and students’ sensitivities.

Last fall, 22-year-old Marie submitted a three-page essay for her online anthropology course at Southern New Hampshire University. Checking her grades on the school’s platform, she was pleased to see an A. But the comments contained multiple leftover traces of ChatGPT, including the grading rubric her professor had pasted into the chatbot and a request that it give Marie “great feedback.”

“To me, it felt like the professor didn’t even read my work,” Marie shared, asking to remain anonymous. She acknowledged the temptation to lean on AI, noting that for many instructors juggling numerous students, teaching can amount to a “third job.”

Marie confronted her professor about the issue during a Zoom meeting. The professor said they had read her essays and had used ChatGPT only as a guide, which the school permitted.

Robert McAuslan, vice president of AI at Southern New Hampshire, said the school embraces AI’s potential to revolutionize education and has issued guidelines for faculty and students to “ensure this technology enhances creativity rather than replaces it.” A list of dos and don’ts encourages authentic, human-focused feedback from teachers using tools like ChatGPT and Grammarly.

“These tools should not replace the work,” Dr. McAuslan stated. “Instead, they should enhance an already established process.”

After encountering a second professor who also appeared to provide AI-generated feedback, Marie opted to transfer to another university.

Paul Schoblin, an English professor at Ohio University in Athens, empathized with her frustration. “I’m not a huge fan of that,” Dr. Schoblin remarked after hearing about Marie’s experience. He also holds a position as an AI Faculty Fellow, tasked with developing effective strategies to integrate AI in teaching and learning.

“The real value you add as an educator comes from the feedback you provide to your students,” he noted. “It’s the personal connection we foster with our students, as they are directly impacted by our words.”

Though advocating for the responsible integration of AI in education, Dr. Schoblin asserted that it shouldn’t merely simplify instructors’ lives. Students must learn to utilize technology ethically and responsibly. “If mistakes happen, the repercussions could lead to job loss,” he warned.

He cited an incident in which a Vanderbilt University school of education official responded to a mass shooting at another university with an email to students emphasizing community bonds. A closing sentence disclosed that ChatGPT had been used to compose it. Students criticized the outsourcing of empathy, and the officials involved temporarily stepped down.

However, not all situations are so clear-cut. Dr. Schoblin remarked that establishing reasonable rules is challenging, since acceptable AI usage differs by subject. His department’s Centre for Teaching, Learning, and Assessment has instead emphasized principles for integrating AI, specifically eschewing a “one-size-fits-all” approach.

The Times reached out to numerous professors whose students had noted AI usage in online reviews. Some instructors admitted to using ChatGPT to create quizzes for computer science programming courses, even as students reported that the quizzes didn’t always make sense. Others used it to organize feedback or to make it more positive. As experts in their fields, they said, they could spot instances where the AI “hallucinated” and generated false information.

There was no consensus among them on what practices were acceptable. Some educators utilized ChatGPT to assist students in reflecting on their work, while others denounced such practices. Some stressed the importance of maintaining transparency with students regarding generative AI use, while others opted to conceal their usage due to student wariness about technology.

Nevertheless, most felt that Stapleton’s experience at Northeastern, where her professor appeared to have used AI to generate class notes and slides without careful review, was unjustifiable. Dr. Schoblin’s view was that such use is defensible provided the professor edits the AI outputs to reflect his own expertise; he likened it to the longstanding academic practice of using content from third-party publishers, such as lesson plans and case studies.

It is absurd, he remarked, to treat professors who use AI to generate slides as “some sort of monsters.”

Christopher Kwaramba, a business professor at Virginia Commonwealth University, referred to ChatGPT as a time-saving partner. He mentioned that lesson plans that once required days to create could now be completed in mere hours. He employs it to generate datasets for fictional retail chains used in exercises designed to help students grasp various statistical concepts.

“I see it as the era of calculators on steroids,” Dr. Kwaramba stated.

Dr. Kwaramba noted that, as a result, he now has more time available for student support hours.

Conversely, other professors, such as Harvard’s David Malan, reported that AI diminished student attendance during office hours. Dr. Malan, a computer science professor, integrated a custom AI chatbot into his popular introductory programming course, allowing hundreds of students access for assistance with coding assignments.

Dr. Malan refined his approach so that the chatbot offers only guidance, never complete answers. Most of the 500 students surveyed in 2023, the tool’s inaugural year, found it beneficial.

With the chatbot fielding common questions about course material, Dr. Malan and his teaching assistants can devote office hours to more meaningful interactions with students, like weekly lunches and hackathons. “These are more memorable moments and experiences,” Dr. Malan reflected.

Katy Pearce, a communications professor at the University of Washington, developed a tailored AI chatbot trained on prior assignments she assessed, enabling students to receive feedback on their writing mimicking her style at any hour, day or night. This is particularly advantageous for those hesitant to seek help.

“Can we foresee a future where many graduate teaching assistants might be replaced by AI?” she pondered. “Yes, absolutely.”

What implications would this have on the future pipeline for professors emerging from the Teaching Assistant ranks?

“That will undoubtedly pose a challenge,” Dr. Pearce concluded.

After filing her complaint with Northeastern, Stapleton participated in several meetings with business school officials. In May, the day after graduation, she learned that her tuition reimbursement wouldn’t be granted.

Her professor, Rick Arrowwood, expressed regret about the incident. Dr. Arrowwood, an adjunct with nearly two decades of teaching experience, said he had run his class materials through ChatGPT, the AI search engine Perplexity, and a presentation generator called Gamma to give them a “fresh perspective.” At a glance, he said, the outputs appeared impressive.

“In hindsight, I wish I had paid closer attention,” he commented.

While he shared materials online with students, he clarified that he had not used them during class sessions, only recognizing the errors when school officials inquired about them.

This awkward episode prompted him to understand that faculty members must be more cautious with AI and be transparent with students about its usage. Northeastern recently established an official AI policy that mandates attribution every time an AI system is employed and requires a review of output for “accuracy and quality.” A Northeastern spokesperson stated that the institution aims to “embrace the use of artificial intelligence to enhance all facets of education, research, and operations.”

“I cover everything,” Dr. Arrowwood asserted. “If my experience can serve as a learning opportunity for others, then that’s my happy place.”

Source: www.nytimes.com

University graduates facing increasing layoffs and rising unemployment rates

When Starbucks announced last month that it was laying off more than 1,000 corporate employees, it highlighted a disturbing trend for white-collar workers: slow wage growth.

It also fueled a long-standing debate among economists: is the recent unemployment just a temporary development, or does it portend something more ominous and irreversible?

After sitting below 4% for more than two years, the overall unemployment rate has been above that threshold since May.

Economists say the job market remains strong by historical standards, and much of the recent weakening appears linked to the economic aftermath of the pandemic: companies hired aggressively amid a surge in demand, then moved to layoffs after the Federal Reserve began raising interest rates. Many of these companies are now trying to run leaner businesses under investor pressure.

But amid rapid advances in artificial intelligence and President Trump’s cuts to the federal workforce, both of which disproportionately affect white-collar jobs, some think a permanent decline in knowledge work has begun.

Carl Tannenbaum, chief economist at Northern Trust, said: “I tell people that there are waves.”

Few industries typify the shifts of the last few years better than video games. A boom began in 2020 as couch-bound Americans searched for new forms of home entertainment; the industry hired aggressively, then reversed course and embarked on a period of layoffs. Thousands of video game workers lost their jobs last year and the year before.

The scale of the job losses was such that the opening monologue of the 2024 Game Developers Choice Awards, the industry’s annual awards show, lamented the “record layoffs.” A unionization trend that began with low-wage quality assurance testers has spread to better-paid workers, such as producers, designers, and engineers, at companies behind hit games like Fallout and World of Warcraft.

At Bethesda Game Studios, the Microsoft-owned maker of Fallout, workers said they unionized in part because they felt a union would give them leverage in a soft labor market, having grown wary after rounds of company layoffs in 2023 and 2024.

“It was the first time Bethesda had experienced a layoff in such a long time,” said Taylor Welling, a studio producer who earned a master’s degree in interactive entertainment. “It scared so many people.” Microsoft declined to comment.

Unemployment in finance and related industries, while still low, increased by about a quarter from 2022 to 2024, as rising interest rates slowed demand for mortgages and businesses sought to become leaner. In an earnings call last summer, Wells Fargo’s chief executive noted that the company’s “efficiency initiative” had pruned its workforce for 16 straight quarters, including a cut of nearly 50% of workers in the company’s home lending division since 2023.

Last fall, Wells Fargo laid off about a quarter of the roughly 45 employees on its behavioral management intake team, which reviews accusations of corporate misconduct against customers and employees. Heather Rolfs, a lawyer who was let go, said she believes the company is trying to save money by reducing its US workforce, and that she and her colleagues made an attractive target because they had recently tried to unionize.

“I think it was a way to get rid of two birds with one stone,” Rolfs said. Some of her former colleagues say they wait anxiously every Tuesday after payday. “We feel we can be fired at any time,” said Eden Davis, another worker on the team.

A spokesman for Wells Fargo said in a statement that the layoffs had nothing to do with the union: “We regularly review and adjust staffing levels to align with market conditions.” He noted that two managers on the team also lost their jobs.

Atif Rafiq, an author on corporate strategy who held senior positions at McDonald’s and Amazon, said many companies are trying to emulate Amazon’s model of building cross-functional teams that reduce barriers between workers with different expertise, such as coding and marketing. In the process, they may discover redundancies and turn to layoffs.

In a memo announcing the layoffs at Starbucks last month, CEO Brian Niccol cited the goal of removing “layers and duplication” to “create smaller, more agile teams.” Nissan offered a similar rationale when it announced management reductions this month.

Overall, the latest data from the Federal Reserve Bank of New York show that the unemployment rate among university graduates has risen about 30% (from 2% to 2.6%) since bottoming out in September 2022, compared with a rise of about 18% (from 3.4% to 4%) for all workers. An analysis by Julia Pollak, chief economist at ZipRecruiter, shows the increase has been sharpest among those with bachelor’s or more advanced degrees, relative to those without a degree.
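As a quick arithmetic check, the quoted percentage increases follow directly from those rates:

\[ \frac{2.6 - 2.0}{2.0} = 0.30 \quad \text{(a 30% rise for graduates)}, \qquad \frac{4.0 - 3.4}{3.4} \approx 0.18 \quad \text{(an 18% rise for all workers)}. \]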

Hiring has also been slower for jobs that require university degrees than for other jobs, according to ADP Research, which studies the labor market.

Some economists say these trends are inherently short-term and not, by themselves, cause for much concern. Lawrence Katz, a labor economist at Harvard University, noted that the increase in the unemployment rate among college-educated workers is only slightly greater than the overall increase, and that unemployment for both groups remains low by historical standards.

Professor Katz argued that slowing wage growth for middle-class workers could simply reflect the discount these workers effectively accepted in exchange for being able to work from home. Data from the Economic Policy Institute show that wages for workers in the 70th and 80th percentiles of the income distribution have grown more slowly since 2019 than wages for other groups.

There are other indications, however, that the returns on a university degree may have changed over time. The wage gap between people with university degrees and those without one grew steadily beginning around 1980, but it has flattened over the past 15 years, though it remains high.

The flattening may partly reflect the fact that as university attendance has grown, employers have more college-educated workers to choose from. But some economists argue that it reflects reduced employer demand for university graduates: as information technology grows more sophisticated, for example, there are fewer jobs like bookkeeping. Such jobs do not necessarily require a university degree, but they often appealed to graduates.

Artificial intelligence could further reduce that demand by automating more white-collar work. A recent academic paper found that software developers using AI coding assistants improved a key measure of productivity by over 25%, with the gains appearing largest among the least experienced developers. The results suggest that adopting AI could reduce the wage premium enjoyed by more experienced coders, since it erodes their productivity advantage over beginners.

Mert Demirer, an MIT economist who co-authored the paper, said in an interview that the work of software developers could change over the long term, with human coders becoming a kind of project manager overseeing multiple AI assistants. In that case, wages could rise as humans become more productive. And if cheaper software leads to even greater demand, AI could expand employment among coders.

Still, at least in the short term, many tech executives and their investors seem to see AI as a way to trim staffing. One software engineer at a large tech company, who declined to be named for fear of harming his job prospects, said his team was about half the size it was last year, and that he and his colleagues were expected to do roughly the same amount of work by relying on AI assistants. Overall, the unemployment rate in technology and related industries jumped by more than half from 2022 to 2024, from 2.9% to 4.4%.

Then there was Trump’s attempt to remake the federal government, which has so far resulted in job losses and hiring freezes for federal employees and for workers at universities and other nonprofits that rely on government funds. Johns Hopkins University, which depends heavily on federal research funding, announced this month that it was laying off 2,000 workers around the world as a result of the cuts.

Professor Katz noted that university-educated workers rely on the federal government, either directly or through nonprofits it funds, more than other groups do. “What appears to be a major contraction in science and research, education and government spending could potentially have a very large impact,” he said.

“The overall unemployment rate among university graduates does not seem to be particularly rising,” he added. “But that could be in the next six months.”

Source: www.nytimes.com

University examiners unable to detect ChatGPT’s responses during actual examinations

It is harder for students to use AI to cheat in face-to-face exams

Trish Gant / Alamy

94% of university exam submissions created using ChatGPT were not detected as generated by artificial intelligence, and these submissions tended to receive higher scores than real student work.

Peter Scarfe and colleagues at the University of Reading in the UK used ChatGPT to generate answers to 63 assessment questions across five modules of the university’s undergraduate psychology course. Because students took these exams from home, they were allowed to consult their notes and references; they could also have used AI, which was not permitted.

The AI-generated answers were submitted alongside real students' answers and accounted for an average of 5% of all answers graded by teachers. The graders were not informed that they were checking the answers of 33 fake students, whose names were also generated by ChatGPT.

The assessment included two types of questions: short answers and longer essays. The prompt given to ChatGPT began with the words, “Include references to academic literature but do not have a separate bibliography section,” followed by a copy of the exam question.

Across all modules, only 6 percent of the AI submissions were flagged as possibly not being the students’ own work, and in some modules no AI-generated work was flagged at all. “On average, the AI answers received higher marks than real student submissions,” says Scarfe, although there was some variability across modules.

“Current AI tends to struggle with more abstract reasoning and synthesising information,” he added. But across all 63 AI submissions, the AI's work had an 83.4% chance of outperforming student work.

The researchers claim theirs is the largest and most thorough study of its kind to date. Although it looked only at psychology degrees at the University of Reading, Scarfe believes the concern applies across academia. “There’s no reason to think that other fields don’t have the same kinds of problems,” he says.

“The results were exactly what I expected,” says Thomas Lancaster, a researcher at Imperial College London. “Generative AI has been shown to be capable of generating plausible answers to simple, constrained text questions.” He points out that unsupervised assessments involving short answers have always been susceptible to cheating.

The strain on faculty tasked with grading also reduces their ability to spot AI cheating. “A time-pressed grader of a short-answer question is highly unlikely to flag a case of AI cheating on a hunch,” Lancaster says. “This university can’t be the only one where this is happening.”

Tackling it at its source is nearly impossible, Scarfe says, so the education industry needs to rethink what it assesses. “I think the whole education industry needs to be aware of the fact that we need to incorporate AI into the assessments that we give to students,” he says.


Source: www.newscientist.com

University graders tricked by AI-generated exam answers

Researchers at the University of Reading conducted a study in which they secretly submitted AI-generated exam answers; without the graders’ knowledge, those answers received higher grades than real students’ work.

In this project, fake student identities were created to submit unedited responses generated by ChatGPT-4 in an online assessment for an undergraduate course.

University graders, unaware of the project, only flagged one out of 33 responses, with the AI-generated answers receiving scores higher than the students’ average.

The study revealed that AI technologies like ChatGPT are nearing the ability to pass the “Turing test”, a benchmark for human-like AI performance without detection.


Described as the “largest and most comprehensive blinded study of its kind,” the authors warn of potential implications for how universities evaluate students.

Dr. Peter Scarfe, an author and Associate Professor at the University of Reading, emphasized the importance of understanding AI’s impact on educational assessment integrity.

The study predicts that AI’s advancement could lead to increased challenges in maintaining academic integrity.

Experts foresee the end of unsupervised take-home exams as a result of this study.

Professor Karen Yeung from the University of Birmingham highlighted how generative AI tools could facilitate undetectable cheating in exams.

The study’s authors suggest that universities integrate AI into their assessment practices and foster awareness of AI’s role in academic work.

Universities are exploring alternatives to take-home online exams to focus on real-life application of knowledge.

Concerns arise regarding potential “de-skilling” of students if AI is heavily relied upon in academic settings.

The authors ponder the ethics of using AI in their study and question if such utilization should be considered cheating.

A spokesman from the University of Reading affirmed that the research was conducted by humans.

Source: www.theguardian.com

Cutting-Edge UK University Amazes Students with Hologram Lecturer Technology

Any university lecturer will tell you that getting students to come to their morning lectures is a real struggle.

But even the most hungover fresher is sure to be captivated by a physics lesson from Albert Einstein or a design masterclass from Coco Chanel.

That could soon be the case for students in the UK, as some universities begin to beam in guest lecturers from around the world using the same holographic technology that has brought deceased singers back to the stage.

Loughborough University, the first in Europe to trial the technology, has used it to bring in sports scientists from the Massachusetts Institute of Technology (MIT), and plans to use it to teach fashion students how to create immersive shows and to test management students on handling difficult business situations.

Professor Vicky Locke, dean of Loughborough Business School, who is leading the rollout of the technology, said students “absolutely love” it and want to take selfies with it. They would prefer “a guest speaker from the industry who walks into the classroom with a smile on their face rather than a two-dimensional person on the wall,” she added.

Zoom calls made students “feel like they were watching TV… it felt distant,” she said, whereas holographic images are more engaging and realistic. The technology will be formally introduced into the curriculum in 2025 after a year of trials.

The box-like holographic units are sold by the LA-based company Proto, whose customers include BT and IBM; the units are used in meetings to reduce the need for business travel. Proto is also collaborating with the Stockholm fashion retailer H&M to create interactive product displays.

David Nussbaum, who founded Proto four years ago after working on holograms of deceased celebrities, said his company could soon bring some of the 20th century’s greatest thinkers back from the dead.

He added: “Proto has the technology to project an image of Stephen Hawking or others so it seems as if he’s really there. You can connect it to an AI trained on his books, lectures, social media and so on, ask it questions, and interact with it. AI Stephen Hawking looks just like him, sounds like him, and interacts as if it were him.”

“It’s awe-inspiring and mind-blowing. I’m shocked at how great the interactions are. Whether people like it or not, AI is part of our lives.”

He added that his company’s ambition is to prove that “you don’t have to be an eccentric billionaire or celebrity to have a hologram.”

Gary Barnett, Professor of Digital Creativity at Loughborough University, who is also leading the implementation, said:


“Students need to understand what it means to use them, to be in that world, to experience them, to interact with them, and all that they will need for their future careers.”

Professor Rachel Thomson, the university’s pro vice-chancellor, said the technology could reduce the need to fly in guest speakers at short notice, encourage international research collaboration, and support the university’s sustainability strategy by reducing the materials students use when building prototypes in engineering, design and the creative arts.

It also allows instructors to display complex equipment, such as engines, more easily than over a video call.

Nussbaum said corporations and large institutions such as universities are the first step in his company’s plans, but that he hopes to roll out mini-units costing less than $1,000 within the next 18 months. These will display a miniature image he likens to the “Wonkavision” scene in Roald Dahl’s Charlie and the Chocolate Factory.

He added that the technology’s AI capabilities meant it was possible to create an avatar that looked like anyone in the world, but noted that this could come with legal complications.

Source: www.theguardian.com

Harvard University debuts the world’s first logical quantum processor

Researchers at Harvard University have achieved a significant milestone in quantum computing by developing a programmable logic quantum processor that can encode 48 logic qubits and perform hundreds of logic gate operations. Hailed as a potential turning point in the field, this advance marks the first demonstration of large-scale algorithm execution on an error-correcting quantum computer.

Harvard’s breakthrough features a new logical quantum processor with 48 logical qubits, enabling the execution of large-scale algorithms on an error-corrected system. The development, led by Mikhail Lukin, represents a major advance toward practical, fault-tolerant quantum computers.

In quantum computing, a quantum bit or “qubit” is a unit of information, similar to a binary bit in classical computing. For more than two decades, physicists and engineers have shown the world that quantum computing is possible in principle by manipulating quantum particles such as atoms, ions, and photons to create physical qubits.

But exploiting the strangeness of quantum mechanics for calculations is more complicated than collecting enough physical qubits, which are inherently unstable and prone to collapsing from their quantum states.

Logical qubit: the building block of quantum computing

The real coin of the realm in useful quantum computing is the so-called logical qubit: a bundle of redundant, error-corrected physical qubits that can store information for use in a quantum algorithm. Creating logical qubits that can be controlled like classical bits is a fundamental hurdle for the field, and it is generally accepted that until quantum computers can reliably run on logical qubits, the technology cannot truly take off. To date, the best computing systems had demonstrated at most two logical qubits and one quantum gate operation, akin to a single unit of code, between them.
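As a minimal illustration of the idea, a toy example rather than the specific error-correcting code used in the Harvard system, the three-qubit repetition code stores one logical qubit redundantly across three physical qubits:

\[ |0\rangle_L = |000\rangle, \qquad |1\rangle_L = |111\rangle, \qquad \alpha|0\rangle_L + \beta|1\rangle_L = \alpha|000\rangle + \beta|111\rangle. \]

A bit flip on any single physical qubit, say \(\alpha|010\rangle + \beta|101\rangle\), can be detected by measuring the two parities \(Z_1 Z_2\) and \(Z_2 Z_3\) and undone by majority vote, all without measuring, and thereby destroying, the stored amplitudes \(\alpha\) and \(\beta\). Practical codes work on the same principle but use many more physical qubits per logical qubit to handle both bit-flip and phase errors.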

A team led by quantum expert Mikhail Lukin achieved the breakthrough. Dolev Bluvstein, a student in Lukin’s lab, was the lead author of the paper.

Credit: Jon Chase/Harvard University Staff Photographer

Breakthrough in quantum computing at Harvard University

A team from Harvard University led by Mikhail Lukin, the Joshua and Beth Friedman Professor of Physics and co-director of the Harvard Quantum Initiative, has achieved an important milestone in the quest for stable, scalable quantum computing. For the first time, the team created a programmable logical quantum processor capable of encoding up to 48 logical qubits and executing hundreds of logical gate operations. Their system is the first demonstration of large-scale algorithm execution on an error-corrected quantum computer, heralding the early days of fault-tolerant quantum computing.

Published in Nature, the research was conducted in collaboration with Markus Greiner, the George Vasmer Leverett Professor of Physics; colleagues from the Massachusetts Institute of Technology; and Boston-based QuEra Computing, a company founded on technology from Harvard research labs.

Harvard University’s Office of Technology Development recently entered into a licensing agreement with QuEra for a patent portfolio based on innovations developed at the Lukin Group.

Lukin called the achievement a potential inflection point similar to the early days of the field of artificial intelligence, where long-theorized ideas of quantum error correction and fault tolerance are beginning to come to fruition.

“I think this is one of those moments where it’s clear that something very special is going to happen,” Lukin said. “While there are still challenges ahead, we expect this new advance to greatly accelerate progress toward large-scale, useful quantum computers.”

The breakthrough builds on several years of work on a quantum computing architecture known as a neutral-atom array, pioneered in Lukin’s lab and now commercialized by QuEra. The key component of the system is a block of ultracold, suspended rubidium atoms in which the atoms, the system’s physical qubits, can move around and be connected, or “entangled,” in pairs mid-computation. Entangled pairs of atoms form gates, the units of computing power.

The team had previously demonstrated low error rates in its entanglement operations, proving the reliability of its neutral-atom array system.

Impact and future directions

“This breakthrough is a masterpiece of quantum engineering and design,” said Denise Caldwell of the National Science Foundation’s Mathematical and Physical Sciences Directorate, which supported the research through NSF’s Physics Frontiers Centers and Quantum Leap Challenge Institutes programs. “By using neutral atoms, the team has not only accelerated the development of quantum information processing but also opened a new door to the search for large-scale logical qubit devices that could have transformative benefits for science and society as a whole.”

Using the logical quantum processor, the researchers demonstrated parallel, multiplexed control of entire patches of logical qubits with lasers, an approach more efficient and scalable than controlling physical qubits individually.

“This work aims to mark a transition in the field: to start testing algorithms with error-corrected qubits instead of physical qubits, enabling a path to larger devices,” said lead author Dolev Bluvstein, a Griffin School of Arts and Sciences student in Lukin’s lab.

The team continues to work toward demonstrating more types of operations on its 48 logical qubits and toward configuring the system to run continuously, rather than in the manually operated cycles it uses today.

Reference: “Logical quantum processor based on reconfigurable atom arrays” by Dolev Bluvstein, Simon J. Evered, Alexandra A. Geim, Sophie H. Li, Hengyun Zhou, Tom Manovitz, Sepehr Ebadi, Madelyn Cain, Marcin Kalinowski, Dominik Hangleiter, J. Pablo Bonilla Ataides, Nishad Maskara, Iris Cong, Xun Gao, Pedro Sales Rodriguez, Thomas Karolyshyn, Giulia Semeghini, Michael J. Gullans, Markus Greiner, Vladan Vuletić and Mikhail D. Lukin, December 6, 2023, Nature.
DOI: 10.1038/s41586-023-06927-3

This research was supported by the Defense Advanced Research Projects Agency through its Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program; the Center for Ultracold Atoms, a National Science Foundation Physics Frontiers Center; the Army Research Office; and QuEra Computing.

Source: scitechdaily.com

Harvard University Researchers Decipher Enigmas of the Brain

A new study led by Harvard Medical School has revealed a neurological basis for daydreaming. Conducted in mice, the study found that neurons in the visual cortex fired in patterns similar to those seen when the animals viewed actual images, suggesting the mice were daydreaming about those images. The effect was most pronounced in daydreams early in the day, and those daydreams predicted the brain’s future responses to visual stimuli, implying a role in brain plasticity. The study suggests that daydreaming may play a part in learning and memory in mice, and potentially in humans. Credit: SciTechDaily.com

Yet little is known about what happens in the brain while it daydreams. A team of researchers at Harvard Medical School investigated the activity of neurons in the visual cortex of mice during quiet wakefulness and found that these neurons occasionally fired in patterns resembling those evoked when a mouse viewed an actual image, suggesting the animal was daydreaming about that image. These daydreams occurred only when the mice were relaxed, showing calm behavior and small pupils.
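The comparison at the heart of the finding, that activity during quiet rest resembles activity evoked by an image, can be illustrated with a small, entirely hypothetical sketch: treat each moment of population activity as a vector of firing rates and score resemblance with cosine similarity. The neuron counts, rates, and values below are invented for illustration; the paper’s actual analyses are far more involved.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two population firing-rate vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical firing rates (spikes/s) of five visual-cortex neurons.
image_response = np.array([12.0, 3.0, 8.0, 0.5, 6.0])  # while viewing an image
rest_activity  = np.array([10.0, 2.5, 7.0, 1.0, 5.0])  # during quiet rest
unrelated      = np.array([1.0, 9.0, 0.5, 7.0, 2.0])   # a dissimilar pattern

print(cosine_similarity(image_response, rest_activity))  # high, about 0.99
print(cosine_similarity(image_response, unrelated))      # low, about 0.32
# A rest-period pattern that scores high against an image's evoked
# pattern is the kind of event the study reads as a "daydream."
```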

The researchers found that the mice were biased toward daydreaming about recently viewed images, and that daydreaming was most prominent early in the day. The daydreams also shaped the brain’s future responses to those images, pointing to a role in brain plasticity. Two brain regions, the visual cortex and the hippocampus, were found to communicate during daydreams; follow-up research with imaging tools will examine how the connections between them change as the brain repeatedly sees an image.

It remains an open question whether human daydreams involve similar patterns in the visual cortex, though preliminary evidence suggests a comparable process occurs when people recall visual images. The findings suggest that giving the mind waking downtime is crucial for generating daydreams, which may in turn matter for brain plasticity. The research was published on December 13 in Nature.

Source: scitechdaily.com

Elon Musk launches STEM-focused K-12 school and university in Austin

SpaceX CEO Elon Musk plans to establish a STEM-focused primary and secondary school in Texas, followed eventually by a university “dedicated to the highest level of education,” according to tax filings. Musk, who moved to the Lone Star State from California during the pandemic, is funding the Austin school with a $100 million donation to his charitable organization, called The Foundation, according to filings first reported by Bloomberg.

The charity’s name appears to be a nod to the science fiction series by Isaac Asimov that chronicles the collapse of a ruling empire to make way for the birth of an alternative society, a fitting reference given Musk’s public criticism of the current education system.

Last year, the head of Tesla and SpaceX blamed liberal universities for his estrangement from his 19-year-old daughter, Vivian Jenna Wilson, who had recently changed her legal name to shed her association with the world’s richest man.

Musk has also said that Asimov’s “Foundation” series influenced his decision to start SpaceX, with the goal of one day landing on Mars. According to Bloomberg, the foundation’s application to open the school was originally submitted in October 2022 and approved in March, though it is unclear when the K-12 school will break ground.

A representative for Mr. Musk did not immediately respond to The Post’s request for comment.

The project begins with a K-12 school focused on STEM: science, technology, engineering, and math. Once it is up and running, the school “ultimately intends to expand its operations and establish a university dedicated to the highest level of education,” according to its application for tax-exempt status with the IRS. Musk said the university will boast “experienced faculty” and weave “hands-on learning experiences including simulations, case studies, manufacturing/design projects, and labs” into the traditional curriculum. The university must first seek accreditation from the Southern Association of Colleges and Schools Commission on Colleges, which accredits degree-granting institutions across many Southern states.

This is not Musk’s first foray into the world of school education. In 2014, the father of 10 co-founded an “experimental” private school called Ad Astra inside SpaceX’s California offices for his five sons and the children of select employees. Ad Astra’s curriculum was unconventional, dropping sports, music, and foreign languages to focus on artificial intelligence, coding, and applied science. When Musk moved to Texas in 2020, the so-called “world’s most exclusive school” followed suit and was renamed Astra Nova School. The school currently has approximately 200 students.

Mr. Musk faces stiff competition in the state capital, home to the main campus of the University of Texas. The recently founded University of Austin, established as an alternative to what its founders see as the illiberalism of traditional American universities, plans to accept its first class of roughly 100 students next fall, according to Bloomberg.

Musk plans to expand further into central Texas with Snailbrook, a town he is building east of Austin to house employees of Tesla and SpaceX as well as staff from his tunnel-digging venture, The Boring Company. A plan filed with the Bastrop County Commissioners Court in January shows a vision for the village of Snailbrook, whose name is a reference to the Boring Company mascot. According to the map, Snailbrook will have 110 homes along streets to be named Boring Boulevard, Waterjet Way, Porpoise Place, and Cutterhead Crossing.

Source: nypost.com

Scientists at Stanford University identify shared genetic factor that offers protection against Alzheimer’s and Parkinson’s diseases

Stanford Medicine and international collaborators have discovered that around 20% of individuals carry a gene variant that reduces their risk of Alzheimer’s disease or Parkinson’s disease by 10% or more. The variant, known as DR4, could inform future vaccines against these neurodegenerative diseases. The study also found a potential link between the tau protein and both diseases, opening new possibilities for targeted therapies and vaccines.

The large-scale analysis drew on medical and genetic data from individuals across several continents. It revealed that certain gene variants related to immune function are associated with a lower risk of developing Alzheimer’s and Parkinson’s diseases; approximately one in five people carries a variant that confers resistance to both.

The research, led by Stanford Medicine, indicates that individuals with this protective variant may be among those most likely to benefit from future vaccines aimed at slowing or stopping the progression of these common neurodegenerative diseases. Analysis of medical and genetic data from hundreds of thousands of people of diverse backgrounds confirmed that carrying the DR4 allele reduced the average chance of developing Parkinson’s or Alzheimer’s disease by more than 10%. New evidence also surfaced suggesting that the tau protein, known for aggregating in the brains of Alzheimer’s patients, may play a role in the development of Parkinson’s disease as well.

The study, published in the Proceedings of the National Academy of Sciences, was a collaboration between researchers at Stanford Medicine and international partners. Senior authors included Emmanuel Mignot, MD, PhD, and Michael Greicius, MD, the Iqbal Farrukh and Asad Jamal Professor at Stanford Medicine, together with Jean-Charles Lambert, PhD, of Inserm at the University of Lille, France. The lead author was Yann Le Guen, PhD; other contributors included Guo Luo, PhD, Aditya Ambati, PhD, and Vincent Damotte, PhD.

Further findings from the study showed that individuals with the DR4 allele were more likely to develop neurofibrillary tangles, characteristic of Alzheimer’s disease, in their brains. The study also suggests that tau, a protein central to Alzheimer’s disease, may have an unknown role in Parkinson’s disease.

DR4 is a particular allele of the DRB1 gene, part of the human leukocyte antigen (HLA) complex, which allows the immune system to recognize the internal contents of cells. One of the study’s key findings was that the peptide fragment DR4 recognizes and presents is a chemically modified segment of the tau protein, which plays a role in both diseases. The authors suggest that a vaccine targeting this modified peptide could interfere with tau aggregation, potentially delaying or slowing the progression of these neurodegenerative diseases in people who carry protective variants of DR4.

The study also noted that the effectiveness of the vaccine may depend on the subtype of DR4 a person carries, which varies among different ethnic groups. For example, one subtype of DR4 that is more common among East Asians may be less protective against neurodegenerative diseases.

Source: scitechdaily.com

New Research from Yale University Uncovers a Crucial Factor in 90% of Unexplained Miscarriages

A Yale University study reveals that placenta testing can identify the cause of 90% of previously unexplained miscarriages, providing a path to improved pregnancy care and emotional relief for affected families.

Researchers at Yale University have demonstrated that placental testing can yield a pathological diagnosis for more than 90% of previously unexplained pregnancy losses, a finding the researchers say could help inform future pregnancy care.

The results of the study were recently published in the journal Reproductive Sciences.

Miscarriage statistics

Of the approximately 5 million pregnancies in the United States each year, about 1 million end in miscarriage (a loss before 20 weeks of pregnancy) and more than 20,000 end in stillbirth (a loss at or after 20 weeks). Up to half of these losses are classified as “unexplained.”

Emotional strain and research purpose

Patients who experience these losses are often told that the loss is unexplained and that they can simply try again, which contributes to patients blaming themselves, said senior author Dr. Harvey Kliman, a research scientist in the Department of Obstetrics, Gynecology and Reproductive Sciences at Yale School of Medicine.

“Pregnancy loss is a tragedy, and being told there is no explanation causes added pain for grieving families,” said Kliman, who is also director of the Reproductive and Placental Research Unit. “Our goal was to expand the current classification system to decrease the number of cases that go unexplained.”

Methodology and findings

For the study, Kliman collaborated with Beatrix Thompson, now a medical student at Harvard University, and Parker Holzer, a former graduate student in Yale’s Department of Statistics and Data Science, to develop an expanded classification system for pregnancy loss based on pathological examination of the placenta.

The team began with 1,527 singleton pregnancies that ended in loss and had been referred to Kliman’s consult service at Yale for evaluation. After excluding cases with insufficient material for testing, the researchers examined 1,256 placentas from 922 patients; 70% of the losses were miscarriages and 30% were stillbirths.

Domenic Rice, 33 weeks pregnant with her fifth child, holds a framed photo of herself with her stillborn son, TJ. Credit: Photo by Nancy Borowicz

By adding the distinct categories of dysmorphic placenta (a placenta with abnormal development) and small placenta (below the 10th percentile for gestational age) to existing categories such as cord accident, abruption, thrombosis, and infection, the authors were able to establish a pathological diagnosis for 91.6% of the pregnancies, including 88.5% of miscarriages and 98.7% of stillbirths.

The most common pathological feature observed in unexplained miscarriages was placental dysmorphism (86.2%), a marker associated with genetic abnormalities. The most common pathological feature observed in unexplained stillbirths was a small placenta (33.9%).

Impact and future recommendations

“This study suggests that the more than 7,000 stillbirths associated with small placentas each year could potentially be detected in the womb, before the loss occurs, by flagging those pregnancies as high risk,” Kliman said. “Similarly, identifying placental dysmorphism could be one way to potentially identify genetic abnormalities in the approximately 1 million miscarriages that occur in our country each year.”

Additionally, he said, “having a specific explanation for a pregnancy loss can help families understand that the loss was not their fault, begin the healing process, and, where possible, prevent similar losses, especially stillbirths, in the future.”

When asked what the most effective way to prevent stillbirth is, Kliman replied, “Measure the placenta!”

Reference: “Placental Pathology in Unexplained Pregnancy Loss” by Beatrix B. Thompson, Parker H. Holzer and Harvey J. Kliman, September 19, 2023, Reproductive Sciences.
DOI: 10.1007/s43032-023-01344-3

Source: scitechdaily.com