US Student Handcuffed After AI Mistakes Bag of Chips for Gun in Baltimore

A system powered by artificial intelligence (AI) mistakenly identified a high school student’s Doritos bag as a firearm, prompting a report to local police that the student was armed.

Taki Allen was enjoying snacks with friends outside Kenwood High School in Baltimore on Monday night when armed police officers approached him.

“Initially, I was unsure of their intentions until they started approaching me with weapons drawn, ordering me to ‘Get on the ground,’ and I thought, ‘What is happening?'” Allen recounted to WBAL-TV 11 News.

Allen said the officers forced him to his knees, handcuffed him, and searched him but found nothing. They then showed him the image that had triggered the alarm.

“I was just holding a bag of Doritos, and they mentioned it resembled a gun because it had two hands with a finger protruding,” Allen explained.

Last year, high schools in Baltimore County began using a gun detection system that leverages AI and school cameras to identify potential weapons. If anything suspicious is detected, both the school and police are notified.
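
The article does not describe how the district’s detection system works internally. As a rough illustration of the general architecture described above, an object detector watching camera frames and alerting both the school and police when a weapon-like object is flagged, here is a minimal Python sketch; the `detect_objects` and `notify` functions, label names, and confidence threshold are hypothetical placeholders, not the vendor’s actual design.

```python
# Hypothetical sketch of a camera-based weapon-alert loop, for illustration only.
# detect_objects() and notify() stand in for a real detection model and alerting hooks.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "handgun", "rifle", "person"
    confidence: float  # model score between 0.0 and 1.0

WEAPON_LABELS = {"handgun", "rifle"}
ALERT_THRESHOLD = 0.8  # assumed; real deployments tune this against false alarms

def detect_objects(frame) -> list[Detection]:
    """Placeholder for running an object-detection model on one camera frame."""
    raise NotImplementedError

def notify(recipient: str, message: str) -> None:
    """Placeholder for the school and police alerting integration."""
    print(f"ALERT -> {recipient}: {message}")

def review_frame(frame, camera_id: str) -> None:
    """Flag weapon-like detections and alert both the school and the police."""
    for det in detect_objects(frame):
        if det.label in WEAPON_LABELS and det.confidence >= ALERT_THRESHOLD:
            msg = f"possible {det.label} on camera {camera_id} ({det.confidence:.0%} confidence)"
            notify("school", msg)
            notify("police", msg)
```

The threshold embodies the usual tradeoff: lowering it catches more real weapons but produces more false alarms like the one described here.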

In a letter to families obtained by WBAL-TV 11 News, the school stated: “We recognize how distressing this situation must have been for the individual who was searched as well as for other students who witnessed the event. Our counselors are ready to provide direct support to those involved and are available to talk with anyone needing assistance.”

Baltimore County police informed the media: “Officers from Essex Precinct 11 responded to Kenwood High School after a report of an individual carrying a weapon. Upon arrival, they searched the individual and confirmed that he did not possess a weapon.”

“Nobody wants their child to experience this. No one wants such incidents to occur,” said Allen’s grandfather, Lamont Davis, to the news station.

Source: www.theguardian.com

Ex-Michigan State Student Claims She Developed Cancer After Pesticide Exposure Labeled “Harmless”

A former Michigan State University graduate student is suing the university, claiming her thyroid cancer is linked to her time there, during which she says she was told her exposure to pesticides was “harmless,” according to claims she and her legal team made on Monday.

Linglong Wei was diagnosed with thyroid cancer on June 26 of last year. In a lawsuit filed in Ingham County Circuit Court, she attributes her condition to her time at MSU between 2008 and 2011.

According to the civil suit, “In Wei’s field studies, Michigan State University required her to apply excessive amounts of harmful pesticides and herbicides.”

Wei alleges exposure to several herbicides, including dichloride, glyphosate, and oxyfluorfen, which she says are linked to cancer.

The lawsuit claims Wei was not adequately trained and did not receive the necessary protective gear to handle such hazardous substances.

Looking back, Wei criticized the university for failing to implement stronger safety protocols.

“During my time as a student at MSU, I voiced my concerns, but no one listened,” Wei told reporters in Lansing.

“I felt afraid due to the department’s reactions. I didn’t strongly advocate for my safety, especially when I was told that exposure was safe.”

Wei, an international student from China, mentioned that the cancer left lasting marks on her throat, and she worries about her prospects of having children.

She believes MSU ignored her concerns because she was an international student.

“International students often feel overlooked, assuming their time here is temporary and their concerns go unheard,” Wei stated.

Maya Green, the former student’s lawyer, said MSU provided her client with inadequate training and safety equipment.

“She was made to handle dangerous pesticides without proper gloves, protective equipment, breathing masks, or sufficient training,” Green said.

“Wei was placed in a position to handle these harmful substances without protection. She was a foreign student, navigating MSU’s system in a language that was not her own.”

The former Michigan student is seeking $100 million in damages.

“Wei was consistently assured that her activities posed no harm, and she relied on that assurance, only to suffer as a result,” her attorney noted.

Michigan State spokesperson Amber McCann declined to comment on the specifics of Wei’s case.

“While we cannot discuss ongoing litigation, we want to stress that Michigan State prioritizes the health and safety of the campus community,” McCann stated.

“We ensure that necessary training and personal protective equipment are provided in accordance with relevant university policies and state and federal regulations.”

Source: www.nbcnews.com

British Student Jailed for Selling Phishing Kits Tied to £100 Million Scam

A 21-year-old student has been sentenced to seven years in jail for designing and distributing online kits responsible for £100 million worth of fraud.

Ollie Holman created phishing kits that replicated the websites of governments, banks, and charities, enabling criminals to steal personal information from unsuspecting victims.

In one instance, the kit was used to create a fake donation page for a charity, resulting in the theft of credit card details from individuals attempting to make contributions.

Based in Eastcote, northwest London, Holman produced and distributed 1,052 phishing kits targeting 69 organizations across 24 countries. He also offered tutorials on how to use the kits and established a network of nearly 700 contacts. The counterfeit websites included in the kits could store sensitive information such as login credentials and banking details.

It is believed that Holman marketed these kits from 2021 to 2023, earning approximately £300,000, with distribution carried out via the encrypted messaging platform Telegram.

Holman, who pursued a degree in electronics and computer engineering at the University of Kent in Canterbury, laundered the proceeds through a cryptocurrency wallet.

The Dedicated Card and Payment Crime Unit, a specialist police unit focused on card and payment fraud, initiated an investigation following intelligence from WMC Global about fraud kits being sold online.

Holman was arrested in October 2023, with a search of his university accommodation leading to the seizure of his devices. Despite his arrest, he continued to provide support to kit buyers through his Telegram channel, prompting a re-arrest in May 2024.

Detectives found links between Holman’s computer and the creation of the kits, which were distributed throughout Europe; one kit was tied to a scam totaling around 1 million euros (£870,000).

Holman pleaded guilty to seven charges, including producing materials for fraud, aiding a criminal enterprise, and possessing criminal property. He received a seven-year sentence at Southwark Crown Court.

Following the sentencing, DS Ben Hurley remarked that Holman facilitated extensive global fraud. “The financial losses associated with Holman’s actions are in the millions. Despite his substantial profits from selling the software, he failed to comprehend the harm caused to victims,” he stated.

Sarah Jennings, a specialist prosecutor with the Crown Prosecution Service, expressed her hope that the verdict serves as a warning to other fraudsters. “No matter how advanced your methods are, you cannot conceal yourself behind online anonymity or encrypted platforms,” she commented.

The CPS has indicated plans to return Holman to court to recover the illicit profits he earned from his criminal activities.

Source: www.theguardian.com

University Professors Utilize ChatGPT, Sparking Student Discontent

In February, Ella Stapleton, a senior at Northeastern University, was going over her notes from an organizational behavior class when she stumbled upon something unusual. Was that a ChatGPT question from her professor?

Within a document her business professor had created for a lesson on leadership models, she noticed an instruction addressed to ChatGPT: “Expand all areas. More in depth and concrete.” Below it was a list of leadership traits, both positive and negative, complete with definitions and bullet points.

Stapleton texted a classmate.

“Did you see the notes he uploaded to Canvas?” she asked, referring to the university’s software for course materials. “He created it using ChatGPT.”

“OMG STOP,” her classmate responded. “What’s going on?”

Curious, Stapleton began to investigate. She went through the professor’s slides and discovered more signs of AI involvement: inconsistencies in the text, skewed images, and glaring mistakes.

She was frustrated. Given the school’s tuition and reputation, she expected a high-quality education, and this course was central to her business major. Its syllabus prohibited academically dishonest activity, including the unauthorized use of AI and chatbots.

“He tells us not to use it, yet he uses it himself,” she remarked.

Stapleton lodged a formal complaint with Northeastern’s business school, citing the inappropriate use of AI and other concerns about teaching methods, demanding a refund of the tuition for that class, which was over $8,000—about a quarter of her semester’s total.

When ChatGPT launched in late 2022, it created a whirlwind of concern across educational institutions, because cheating had suddenly become incredibly easy: students tasked with writing essays could let the tool produce them in seconds. Some institutions banned it, while others introduced AI-detection services, despite concerns about their accuracy.

However, the tide has turned. Now it is students scrutinizing professors for their heavy reliance on AI, voicing complaints on course-review platforms and describing materials and feedback as obviously “ChatGPT-generated” or “algorithmic.” They call out the hypocrisy and make a financial argument, insisting they deserve instruction from humans, not from algorithms they could access for free.

On the other side, professors have claimed they use AI chatbots as a means to enhance education. An instructor interviewed by The New York Times stated that the chatbot streamlined their workload and acted as an automated teaching assistant.

The number of educators using these tools is rising. In a national survey of more than 1,800 higher-education instructors conducted last year, 18% described themselves as frequent users of generative AI tools; in this year’s follow-up survey, that figure nearly doubled, according to Tyton Partners, the consultancy behind the study. AI companies are eager to facilitate the shift, with startups such as OpenAI and Anthropic recently releasing enterprise versions of their chatbots designed specifically for educational institutions.

(The Times is suing OpenAI for copyright infringement, as the company allegedly used news content without permission.)

Generative AI is clearly here to stay, but universities are still working out the norms around it. Professors are on a learning curve of their own and, like Stapleton’s instructor, can misjudge both the technology’s pitfalls and how strongly students will react.

Last fall, 22-year-old Marie submitted a three-page essay for her online anthropology course at Southern New Hampshire University. Checking her grade on the school’s platform, she was pleased to see an A. But in the comments, her professor had left in an exchange with ChatGPT, including the grading rubric that had been pasted into the chatbot and a request to give Marie “great feedback.”

“To me, it felt like the professor didn’t even read my work,” said Marie, who asked not to be fully identified. She added that she understood the temptation to lean on AI, since for instructors managing large numbers of students, teaching can amount to a “third job.”

Marie raised the issue with her professor during a Zoom meeting. The professor said they had read her essays but used ChatGPT as a guide, which the school permitted.

Robert McAuslan, vice president of AI at Southern New Hampshire University, said the school embraces AI’s potential to transform education and has established guidelines for faculty and students to “ensure this technology enhances creativity rather than replaces it.” The guidelines include a list of do’s and don’ts for instructors using tools like ChatGPT and Grammarly, encouraging authentic, human-focused feedback.

“These tools should not replace the work,” Dr. McAuslan stated. “Instead, they should enhance an already established process.”

After encountering a second professor who also appeared to provide AI-generated feedback, Marie opted to transfer to another university.

Paul Schoblin, an English professor at Ohio University in Athens, empathized with her frustration. “I’m not a huge fan of that,” Dr. Schoblin remarked after hearing about Marie’s experience. He also holds a position as an AI Faculty Fellow, tasked with developing effective strategies to integrate AI in teaching and learning.

“The real value you add as an educator comes from the feedback you provide to your students,” he noted. “It’s the personal connection we foster with our students, as they are directly impacted by our words.”

Though he advocates the responsible integration of AI in education, Dr. Schoblin said it shouldn’t merely make instructors’ lives easier; faculty must also teach students to use the technology ethically and responsibly. “If mistakes happen, the repercussions could lead to job loss,” he warned.

He cited a recent incident in which officials at Vanderbilt University’s school of education responded to a mass shooting at another university with an email to students emphasizing community bonds, only for a line in the message to disclose that ChatGPT had been used to compose it. Students criticized the outsourcing of empathy, and the officials involved temporarily stepped down.

However, not all situations are clear-cut. Dr. Schoblin said establishing reasonable rules is challenging because acceptable AI use differs by subject. Rather than adopting a “one-size-fits-all” approach, his university’s Center for Teaching, Learning, and Assessment has emphasized broad principles for integrating AI.

The Times reached out to numerous professors whose students had flagged their AI use in online reviews. Some admitted to using ChatGPT to create quizzes for computer science programming courses, even as students reported that these quizzes didn’t always make sense. Others used it to organize feedback or to make it more positive. As experts in their fields, they said, they could spot instances of AI “hallucination,” where the chatbot generated false information.

There was no consensus among them on what practices were acceptable. Some educators utilized ChatGPT to assist students in reflecting on their work, while others denounced such practices. Some stressed the importance of maintaining transparency with students regarding generative AI use, while others opted to conceal their usage due to student wariness about technology.

Nevertheless, most felt that what Stapleton encountered at Northeastern, where her professor appeared to use AI to generate class notes and slides, was defensible, provided the professor edited the output to reflect his own expertise. That was Dr. Schoblin’s view; he likened it to the longstanding academic practice of drawing on third-party content such as publishers’ lesson plans and case studies.

To suggest that professors who use AI to generate their slides are “some sort of monsters,” he said, “is absurd to me.”

Christopher Kwaramba, a business professor at Virginia Commonwealth University, referred to ChatGPT as a time-saving partner. He mentioned that lesson plans that once required days to create could now be completed in mere hours. He employs it to generate datasets for fictional retail chains used in exercises designed to help students grasp various statistical concepts.

“I see it as the age of the calculator on steroids,” Dr. Kwaramba stated.

As a result, Dr. Kwaramba said, he now has more time available for office hours with students.

Conversely, other professors, such as Harvard’s David Malan, reported that AI diminished student attendance during office hours. Dr. Malan, a computer science professor, integrated a custom AI chatbot into his popular introductory programming course, allowing hundreds of students access for assistance with coding assignments.

Dr. Malan had to refine the chatbot’s approach so that it offers only guidance, not complete answers. Most of the 500 students surveyed in 2023, the tool’s first year of use, found it helpful.
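
The article does not detail how the course’s chatbot is configured. As a minimal sketch of the general “guidance, not answers” guardrail it describes, the snippet below constrains a model through its system prompt, using the OpenAI Python client as an example; the prompt wording, model name, and helper function are assumptions for illustration, not the course’s actual implementation.

```python
# Minimal sketch of a "guidance, not answers" tutoring guardrail.
# The system prompt, model name, and structure are illustrative assumptions,
# not the course's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL = (
    "You are a teaching assistant for an introductory programming course. "
    "Help the student reason toward a solution: explain concepts, point to "
    "relevant documentation, and ask guiding questions. Never write complete "
    "solutions or finished code for graded assignments."
)

def tutor_reply(student_question: str) -> str:
    """Return a hint-oriented reply constrained by the guardrail prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("My loop prints one extra line. Can you just give me the fixed code?"))
```

Prompt-level constraints like this are easy to set up but not airtight; students can sometimes coax full answers out anyway, so courses typically pair them with academic-integrity policy rather than relying on the prompt alone.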

With the chatbot fielding routine questions about course materials, Dr. Malan and his teaching assistants can now devote office hours to more meaningful interactions with students, such as weekly lunches and hackathons. “These are more memorable moments and experiences,” Dr. Malan reflected.

Katy Pearce, a communications professor at the University of Washington, developed a custom AI chatbot trained on assignments she had previously graded, so that students can receive feedback on their writing, in her style, at any hour of the day or night. She said this is particularly helpful for students who are hesitant to ask for help.

“Can we foresee a future where many graduate teaching assistants might be replaced by AI?” she pondered. “Yes, absolutely.”

What implications would this have for the future pipeline of professors, who often emerge from the ranks of teaching assistants?

“That will undoubtedly pose a challenge,” Dr. Pearce concluded.

After filing her complaint with Northeastern, Stapleton participated in several meetings with business school officials. In May, the day after graduation, she learned that her tuition reimbursement wouldn’t be granted.

Her professor, Rick Arrowwood, expressed regret about the episode. Dr. Arrowwood, an adjunct with nearly two decades of teaching experience, said he had run his class materials through ChatGPT, the AI search engine Perplexity, and an AI presentation generator called Gamma to give them a fresh perspective. At first glance, he said, the outputs appeared impressive.

“In hindsight, I wish I had paid closer attention,” he commented.

While he shared materials online with students, he clarified that he had not used them during class sessions, only recognizing the errors when school officials inquired about them.

The awkward episode, he said, made him realize that faculty must be more cautious with AI and transparent with students about its usage. Northeastern recently established an official AI policy that mandates attribution whenever an AI system is employed and requires a review of the output for “accuracy and quality.” A Northeastern spokesperson stated that the institution aims to “embrace the use of artificial intelligence to enhance all facets of education, research, and operations.”

“I cover everything,” Dr. Arrowwood asserted. “If my experience can serve as a learning opportunity for others, then that’s my happy place.”

Source: www.nytimes.com

UK Universities Urged to “Stress-Test” Assessments as 92% of Students Now Use AI

UK universities are being urged to “stress-test” all of their assessments after new research showed that almost all students now use generative artificial intelligence (genAI) in their studies.

A survey of roughly 1,000 students, both domestic and international, revealed a sharp rise in genAI use over the past year: the proportion who said they had used tools such as ChatGPT for their assessments jumped from 53% in 2024 to 88% in 2025.

Overall, the percentage of students using AI tools in some form rose from 66% in 2024 to 92% in 2025, leaving only 8% who do not use AI, according to a report published by the Higher Education Policy Institute and Kortext.

Josh Freeman, the author of the report, emphasized the unprecedented shift in student behavior within a year and urged universities to pay attention to the impact of generative AI in academic settings.

Freeman stated, “There is an urgent need for all assessments to be reviewed to ensure they cannot be easily completed using AI. This calls for a bold retraining effort for staff to understand the power and potential of generative AI.”

Institutions are encouraged to share best practices and address potential issues related to the use of AI tools for learning enhancement rather than hindrance.

Students are using genAI for various purposes, such as explaining concepts, summarizing articles, and suggesting research ideas. However, 18% of students include AI-generated text directly in their work.

Many students use AI to save time and improve the quality of their work, but concerns about academic misconduct and biased outcomes deter some from using such tools.

Women and students from privileged backgrounds express more apprehension about AI use, while men and STEM students exhibit more enthusiasm. The digital disparity identified in 2024 seems to have widened, particularly in summarizing articles.

Despite these concerns, most students believe their universities are responding effectively to academic integrity issues related to AI. Training in AI skills, however, has been provided to only about a third of students, and ambiguity remains about when AI may be used in academic work.

Dr. Thomas Lancaster from Imperial College London emphasizes the importance of preparing students for the ethical use of AI in education and future careers to avoid a competitive disadvantage.

In response to the findings, a Universities UK spokesperson said universities must equip students for a world shaped by AI while addressing the challenges posed by rapidly evolving technologies, and stressed the importance of upholding academic integrity and educating students from the outset about the consequences of academic misconduct.

Source: www.theguardian.com

Experts Uncover the Key to Student Success in Education

Research by the University of South Australia and its partners shows that increasing student engagement with complex learning tasks significantly improves critical thinking and problem-solving skills. This study suggests that teachers should focus on deep learning techniques to improve student outcomes.

High engagement, high returns. This is advice from education experts at the University of South Australia for teachers looking to improve student performance.

In a new study conducted in partnership with Flinders University and the Melbourne School of Education, researchers found that fewer than a third of teachers engage students in complex learning, limiting students’ opportunities to develop critical thinking and problem-solving skills.

Researchers who filmed and assessed lessons in South Australia and Victoria found that nearly 70% of classroom tasks involved superficial learning, such as simple question-and-answer exchanges, note-taking, and listening to the teacher, rather than activities that engaged students at a deeper level.

Emphasis on deep learning

UniSA researcher Dr Helen Stephenson said teachers needed more support to plan interactive and constructive lessons that foster deep learning.

“When it comes to learning, the greater the engagement, the deeper the learning. But too often, students are not very active and instead do passive work,” says Dr Stephenson.

“Our research suggests that about 70% of classroom tasks were either ‘passive’ (students showing little observable engagement) or involved something simple, such as answering questions on a fact sheet, that was considered ‘active’. While there is certainly a place for such tasks in the classroom, student learning is greatly enhanced when students spend more time on complex activities that promote deep conceptual learning. Deep learning requires organizing knowledge into conceptual structures, which has been shown to improve information retention and learning outcomes; it also supports the kind of knowledge needed for innovation. Making small changes to teachers’ existing lesson plans and instruction can significantly increase student engagement, which in turn improves overall outcomes.”

She continues: “At a basic level, teachers need to consider how they can adjust existing classroom activities to move more tasks up the engagement scale. Take watching a video, for example. Students can watch the video silently (this is ‘passive’); watch it and take notes using the presenter’s words (this is considered ‘active’); write down any questions that arise while watching (this is ‘constructive’); or watch it and then discuss it with other students to generate different ideas (this is ‘interactive’). Interactive classroom engagement involves students participating in activities with other students and receiving stimulation that fosters deeper understanding: they make judgments, propose and critique arguments and opinions, and come up with solutions to problems. These activities also help develop critical thinking and reasoning skills, all of which are predictive of learning gains.”

Findings on teacher awareness

Interestingly, one of the study’s key findings is that many teachers do not appear to recognize or fully appreciate how different kinds of classroom tasks stimulate different modes of student engagement.

“Simply changing class activities from ‘active’ to ‘constructive’ can go a long way toward improving student learning,” says Dr Stephenson.

“Teachers should be supported to engage in professional development to shift their thinking to practices that support deeper learning and better outcomes for students.”

Reference: “Using the Extended ICAP-Based Coding Guide as a Framework for Analyzing Classroom Observations” by Stella Vosniadou, Michael J. Lawson, Erin Bodner, Helen Stephenson, David Jeffries, and I Gusti Ngurah Darmawan, 13 April 2023, Teaching and Teacher Education.
DOI: 10.1016/j.tate.2023.104133

This research was funded by the Australian Research Council.

Source: scitechdaily.com