Elon Musk Threatens Legal Action Against Apple Over OpenAI, Sparking Feud with Sam Altman

Elon Musk has threatened to take legal action against Apple on behalf of his AI startup xAI, alleging that the iPhone manufacturer favors OpenAI and breaches antitrust law with its App Store rankings. The statement drew a sharp response from OpenAI CEO Sam Altman and ignited a feud on X between the two former business partners.

“Apple is operating in a manner that prevents any AI company other than OpenAI from reaching the top position on the App Store. This clearly violates antitrust regulations. xAI is prepared to take swift legal action,” Musk declared in a post on X.

Currently, OpenAI’s ChatGPT occupies the top spot in the “Top Free Apps” category of the US App Store, while xAI’s Grok sits in fifth place. Apple has partnered with OpenAI to integrate ChatGPT across iPhone, iPad, and Mac. Neither Apple nor xAI responded to requests for comment.

Altman replied to Musk on X, saying, “This is an unexpected claim considering we’ve heard Elon is attempting to manipulate X for his own benefit and to undermine his competitors, including those he dislikes.” Reports indicate that Musk has tweaked X’s algorithm to favor his own posts.

Altman and Musk co-founded OpenAI in 2015, but Musk departed the startup in 2018 and withdrew his funding after his proposal to take control was rejected. Musk has since sued OpenAI twice over its planned shift to a for-profit structure, alleging “deceit of Shakespearean proportions.” Altman has characterized Musk as a bitter and envious ex-partner, resentful of the company’s achievements since his departure.

Musk responded to Altman’s post, writing, “You got 3 million views on your dishonest post, you liar, far more than I have gotten on many of mine, despite my having 50 times your follower count!”

Altman retorted to Musk several times, initially calling Musk’s lack of views a “skill issue” or “bot-related” before posing legal questions of his own.

Users on X highlighted through the Community Notes feature that several apps, aside from OpenAI, have claimed top positions on the App Store this year.

For instance, the Chinese AI app DeepSeek reached the No. 1 position in January, while Perplexity ranked first in the App Store in India in July.

One user inquired about Grok, X’s native AI. The chatbot replied: “Based on confirmed evidence, Sam Altman is correct.”

Musk’s remarks come as regulators and competitors heighten their scrutiny of Apple’s App Store dominance.

Earlier this year, an EU antitrust regulator ordered Apple to pay a fine of 500 million euros ($581.15 million).

In early 2024, the U.S. Department of Justice filed an antitrust lawsuit against Apple, accusing the iPhone manufacturer of establishing and maintaining “broad, persistent, and illegal” monopolies in the smartphone market.

Source: www.theguardian.com

University Professors Utilize ChatGPT, Sparking Student Discontent

In February, Ella Stapleton, a senior at Northeastern University, was going over her notes from an organizational behavior class when she stumbled upon something unusual. Was that a ChatGPT question from her professor?

Within a document created by her business professor for a lesson on leadership models, she noticed an instruction to ChatGPT: “Expand all areas. More in depth and concrete.” It was followed by a list of positive and negative leadership traits, complete with definitions and bullet points.

Stapleton texted a classmate.

“Did you see the notes he uploaded to Canvas?” she asked, referring to the university’s software for course materials. “He created it using ChatGPT.”

“OMG STOP,” her classmate responded. “What’s going on?”

Curious, Stapleton began to investigate. She went through the professor’s slides and discovered more signs of AI involvement: inconsistencies in the text, skewed images, and glaring mistakes.

She was frustrated. Given the school’s tuition and reputation, she expected a high-quality education, and this course was crucial for her business major. Its syllabus explicitly prohibited “academically dishonest activities,” including the unauthorized use of AI and chatbots.

“He tells us not to use it, yet he uses it himself,” she remarked.

Stapleton lodged a formal complaint with Northeastern’s business school, citing the inappropriate use of AI and other concerns about teaching methods, demanding a refund of the tuition for that class, which was over $8,000—about a quarter of her semester’s total.

When ChatGPT launched in late 2022, it set off a wave of concern across educational institutions: cheating had become incredibly easy. Students tasked with writing essays could let the tool handle the work in seconds. Some institutions banned it, while others introduced AI detection services, despite concerns about their accuracy.

However, the tide has turned. Nowadays, students are scrutinizing professors for their heavy reliance on AI, voicing complaints on course-review platforms and calling out material that reads as “ChatGPT-generated” or “algorithmic.” They accuse instructors of hypocrisy and make a financial argument: they are paying to be taught by humans, not by algorithms they could access for free.

On the other side, professors have claimed they use AI chatbots as a means to enhance education. An instructor interviewed by The New York Times stated that the chatbot streamlined their workload and acted as an automated teaching assistant.

The number of educators using these tools is on the rise. In a national survey of more than 1,800 higher-education instructors conducted last year, 18% identified as frequent users of generative AI tools. This year’s follow-up survey nearly doubled that figure, according to Tyton Partners, the consultancy behind the study. AI companies are eager to facilitate the shift, with OpenAI and Anthropic recently releasing enterprise versions of their chatbots designed specifically for educational institutions.

(The Times is suing OpenAI for copyright infringement, as the company allegedly used news content without permission.)

Generative AI is clearly here to stay, yet universities are struggling to keep their standards current. Professors are working through their own learning curve and, like Stapleton’s instructor, sometimes underestimate both the technology’s pitfalls and their students’ watchfulness.

Last fall, 22-year-old Marie submitted a three-page essay for her online anthropology course at Southern New Hampshire University. Checking her grades on the school’s platform, she was pleased to see an A. In the comments, however, her professor had accidentally included an exchange with ChatGPT, complete with the grading rubric pasted in for the chatbot and a request for “great feedback” for Marie.

“To me, it felt like the professor didn’t even read my work,” Marie shared, asking to remain anonymous. She noted that the temptation to lean on AI was understandable for instructors juggling what can feel like a “third job” while managing large numbers of students.

Marie confronted her professor during a Zoom meeting about this issue. The professor claimed that they had read her essays but used ChatGPT as an approved guide.

Robert McAuslan, Vice President of AI at Southern New Hampshire, said schools should embrace AI’s potential to revolutionize education, with guidelines for faculty and students to “ensure this technology enhances creativity rather than replaces it.” A list of do’s and don’ts was issued to encourage authentic, human-focused feedback among teachers using tools like ChatGPT and Grammarly.

“These tools should not replace the work,” Dr. McAuslan stated. “Instead, they should enhance an already established process.”

After encountering a second professor who also appeared to provide AI-generated feedback, Marie opted to transfer to another university.

Paul Schoblin, an English professor at Ohio University in Athens, empathized with her frustration. “I’m not a huge fan of that,” Dr. Schoblin remarked after hearing about Marie’s experience. He also holds a position as an AI Faculty Fellow, tasked with developing effective strategies to integrate AI in teaching and learning.

“The real value you add as an educator comes from the feedback you provide to your students,” he noted. “It’s the personal connection we foster with our students, as they are directly impacted by our words.”

Though advocating for the responsible integration of AI in education, Dr. Schoblin asserted that it shouldn’t merely simplify instructors’ lives. Students must learn to utilize technology ethically and responsibly. “If mistakes happen, the repercussions could lead to job loss,” he warned.

He cited a recent incident in which officials at Vanderbilt University’s school of education responded to a mass shooting at another university with an email to students emphasizing community bonds. A line at the bottom disclosed that ChatGPT had been used to compose it. Students criticized the outsourcing of empathy, and the officials involved temporarily stepped down.

However, not all situations are straightforward. Dr. Schoblin remarked that establishing reasonable rules is challenging, since acceptable AI usage differs by subject. His university’s Center for Teaching, Learning, and Assessment has instead emphasized principles for integrating AI, deliberately eschewing a one-size-fits-all approach.

The Times reached out to numerous professors whose students had noted AI usage in online reviews. Some instructors admitted to using ChatGPT to create quizzes for computer science programming courses, even as students reported that the quizzes did not always make sense. Others used it to organize their feedback or to make its tone more positive. As experts in their fields, they said they could spot instances of AI “hallucination,” where false information is generated.

There was no consensus among them on what practices were acceptable. Some educators utilized ChatGPT to assist students in reflecting on their work, while others denounced such practices. Some stressed the importance of maintaining transparency with students regarding generative AI use, while others opted to conceal their usage due to student wariness about technology.

Nevertheless, most did not consider Stapleton’s experience at Northeastern, where her professor appeared to use AI to generate class notes and slides, an egregious offense. That was Dr. Schoblin’s view, provided the professor had edited the AI outputs to fit his expertise. He likened it to the longstanding academic practice of drawing on third-party content such as publishers’ lesson plans and case studies.

Treating professors who use AI to generate slides as “some sort of monsters,” he said, “is absurd to me.”

Christopher Kwaramba, a business professor at Virginia Commonwealth University, referred to ChatGPT as a time-saving partner. He mentioned that lesson plans that once required days to create could now be completed in mere hours. He employs it to generate datasets for fictional retail chains used in exercises designed to help students grasp various statistical concepts.

“I see it as the era of the calculator on steroids,” Dr. Kwaramba stated.

Dr. Kwaramba noted that, as a result, he now has more time to hold office hours for students.

Conversely, other professors, such as Harvard’s David Malan, reported that AI diminished student attendance during office hours. Dr. Malan, a computer science professor, integrated a custom AI chatbot into his popular introductory programming course, allowing hundreds of students access for assistance with coding assignments.

Dr. Malan had to refine his approach to ensure the chatbot offered only guidance, not complete answers. In a survey of 500 students who used it during its inaugural year in 2023, most found the resource beneficial.

With the chatbot fielding common questions about course material, Dr. Malan and his teaching assistants can now focus on more meaningful interactions with students during office hours, along with weekly lunches and hackathons. “These are more memorable moments and experiences,” Dr. Malan reflected.

Katy Pearce, a communications professor at the University of Washington, developed a custom AI chatbot trained on assignments she had previously graded, enabling students to receive feedback in her style on their writing at any hour, day or night. This is particularly advantageous for students hesitant to seek help.

“Can we foresee a future where many graduate teaching assistants might be replaced by AI?” she pondered. “Yes, absolutely.”

What would that mean for the pipeline of future professors who rise from the teaching assistant ranks?

“That will undoubtedly pose a challenge,” Dr. Pearce concluded.

After filing her complaint with Northeastern, Stapleton participated in several meetings with business school officials. In May, the day after graduation, she learned that her tuition reimbursement wouldn’t be granted.

Her professor, Rick Arrowwood, expressed regret about the incident. Dr. Arrowwood, an adjunct with nearly two decades of teaching experience, said he had fed his class materials into AI tools, including ChatGPT, the search engine Perplexity, and a presentation generator called Gamma, to give them a “fresh perspective.” At first glance, he said, the outputs looked impressive.

“In hindsight, I wish I had paid closer attention,” he commented.

While he shared materials online with students, he clarified that he had not used them during class sessions, only recognizing the errors when school officials inquired about them.

This awkward episode prompted him to understand that faculty members must be more cautious with AI and be transparent with students about its usage. Northeastern recently established an official AI policy that mandates attribution every time an AI system is employed and requires a review of output for “accuracy and quality.” A Northeastern spokesperson stated that the institution aims to “embrace the use of artificial intelligence to enhance all facets of education, research, and operations.”

“I’m all about teaching,” Dr. Arrowwood asserted. “If my experience can serve as a learning opportunity for others, then that’s my happy place.”

Source: www.nytimes.com

Amazing Fireballs Light Up the Sky in Mexico City, Sparking Awe and Memes Galore

Bright objects falling from space lit up the sky in the Mexican capital around 3am on Wednesday, spreading over plains, volcanoes, and small towns.

Videos of the fireball streaking across the sky and exploding in a burst of light over Mexico City captured the attention of many across the country.

“No, the meteorite that exploded last night is not a reason to reach out to your ex,” one person joked.

Soon, the internet was filled with edited images of fireballs featuring cartoon characters and political jokes.

Bright objects illuminate the sky in Mexico City early on Wednesday. Photo: webcamsmx via AP

Mexican scientists quickly determined that the object streaking across the sky was not a meteorite but a bolide, or bólido in Spanish.

A bolide, as defined by NASA, is “a very bright meteor that is spectacular enough to be seen over a large area.”

Mario Rodriguez, a space science researcher at the National Autonomous University of Mexico, explained that it could be classified as a meteor or a fragment of one.

Resembling a shooting star, the bolide burned as it descended through the Mexican sky in the early hours of Wednesday.

“Due to the high pressure on the object, they begin to flare up with their trailing tails and emit light,” said Rodriguez, who is part of a team of scientists analyzing the video that amazed so many Mexicans. He added that unlike meteorites, which impact the Earth, bolides disintegrate in the atmosphere.

According to him, this particular object was around five feet long and posed no danger to the public.

Source: www.nbcnews.com

Proposed phone bill for young teens faces opposition from government ministers, sparking safety concerns

The bill seeking to ban addictive smartphone algorithms targeting young teenagers was watered down after opposition from technology secretary Peter Kyle and education secretary Bridget Phillipson.

The Safer Phone Bill, introduced by Labour MP Josh McAllister, is set to be discussed in the Commons on Friday. Despite receiving support from various MPs and child protection charities, the government has opted to further investigate the issue rather than implement immediate changes.

Government sources indicate that the new proposal will be accepted, as the original bill put forward by McAllister did not receive ministerial support.

The government believes more time is needed to assess the impact of mobile phones on teenagers and to evaluate emerging technologies that let phone companies control the content served to them.

With Peter Kyle’s opposition, the bill will not become the major second piece of online safety legislation that some advocates had hoped for.

Although not fundamentally against government intervention on this issue, a source close to Kyle mentioned that the work is still in its early stages.

The original proposal included requirements for social media companies to exclude young teens from their algorithms and limit addictive content for those under 16. However, these measures were removed from the final bill.

Another measure to ban mobile phones in schools was also dropped after objections from Bridget Phillipson, who believes schools should self-regulate. There are uncertainties regarding potential penalties for violations.

Health Secretary Wes Streeting has been vocal about addressing the issue of addictive smartphones, publicly supporting McAllister’s bill.

The revised private member’s bill instead instructs the chief medical officer, Chris Whitty, to investigate the health impacts of smartphone use.


McAllister hopes that the bill will prompt the government to address addictive smartphone use among children more seriously, rather than just focusing on harmful or illegal content.

If ministers commit to adopting the new measures as anticipated, McAllister will not push the bill to a vote.

The government has pledged to “publish a research plan on the impact of social media use on children” and seek advice from the UK’s chief medical officer on parents’ management of their children’s smartphone and social media usage.

Polls indicate strong public support for measures restricting young people’s use of social media, with a majority favoring a ban on social media for those under 16.

Source: www.theguardian.com

Google Calendar removes Black History Month, Pride and other cultural events sparking controversy

Google’s online and mobile calendars no longer feature Black History Month, Women’s History Month, and LGBTQ+ Holidays.

Previously, the company behind the world’s largest search engine marked the beginning of Black History Month in February and Pride Month in June, but the events will not appear in 2025.

The removal of these holidays was first reported by The Verge last week.

Google spokesperson Madison Cushman Veld told The Guardian in a statement that maintaining the listed holidays was not “sustainable.”

“A few years ago, the calendar team started manually adding a broader set of cultural moments in many countries worldwide. We got feedback that some other events and countries were missing, and maintaining hundreds of moments globally wasn’t sustainable. So in mid-2024 we returned to showing only public holidays and national observances from timeanddate.com globally, while allowing users to manually add other important moments,” the statement said.

The decision to remove Black, LGBTQ+ and women’s holidays is the latest in a series of changes Google has made since the start of Donald Trump’s second presidency.


Recently, Google announced a rollback of previous commitments to diversity, equity, and inclusion (DEI) initiatives in employment policy following an order by the US President to end DEI in federal agencies.

Google also revealed that the Gulf of Mexico will be labeled “Gulf of America” for US users of its maps, following an executive order by Trump that also renamed Alaska’s Denali to “Mount McKinley.” The company said the change would take effect for US users on Monday.

Many users on social media have expressed disappointment and frustration at Google’s latest decision. Users who wish to track events like Pride Month, Black History Month, and Indigenous Peoples Month will need to add them to their calendars manually.

Google assured The Guardian that changes to the calendars will not impact future Google Doodles, which typically celebrate these events with digital artwork on the website’s homepage. The company stated, “Google continues to actively celebrate and promote our cultural moments as a company,” and offers a Black History Month Playlist on YouTube Music.

Source: www.theguardian.com

Britishcore: TikTok trend celebrates sausage rolls and Oasis, sparking interest in British culture!

When you think of British cultural exports in the 21st century, familiar examples like James Bond, Downton Abbey, and Adele might spring to mind.

But in the algorithm-driven world of TikTok, where a trend known as “Britishcore” has become one of the most sought-after movements right now, everyday aspects of British life are becoming a hot topic.

“Britishcore” is a cultural term that, at the turn of the decade, was used to depict rundown pubs, lonely traffic cones and other symbols of the bleakness of British life.

Today, it has expanded to include Trainspotting-inspired videos, lip-syncing from the stars of Twilight Nights, and a satirical celebration of the Oasis reunion.

TikTok points to growing interest in British fashion, comedy, and travel on the platform as evidence of a renewed interest in British culture and its typically satirical take on it.

The trend has proven so popular that even international content creators are joining in, eager to show just how Britishcore their content is.

One notable example is the American DJ The Dare, who posted a jokey video of himself at Paddington Station set to Ewan McGregor’s opening monologue from Trainspotting, with Underworld’s “Born Slippy” playing in the background.

The Dare posted the video, which has been viewed 245,000 times, with the caption “British Max”.

The Dare filmed himself at Paddington Station with the caption “British Max”, set to Ewan McGregor’s opening monologue from Trainspotting and a soundtrack of Underworld’s “Born Slippy”.
Photo: Theo Wargo/Getty Images via NYFW: The Shows

Another video showed US cinema staff lip-syncing to a clip of Gemma Collins from the reality show The Only Way Is Essex, while an Australian radio host’s post promoting the Oasis reunion has been viewed 3.7 million times.

In one TikTok, US content creator @the_quivey10 compiled a list of things he’d like to do if he were in the UK, including everyday activities made popular on Britishcore TikTok, like doing a “cheeky Tesco run” and getting a Greggs sausage roll.

TikTok said it has seen double-digit increases in posts using the hashtags #ukcomedy, #ukfashion, and #uktravel since January, and that the #OasisReunion video has been viewed more than 100 million times in the past two weeks.

“This summer, British pop culture exploded onto the global stage,” said Louisa McGillicuddy, TikTok’s UK trends expert. “From the Brat phenomenon to the excitement over the Oasis reunion… TikTok communities both in the UK and internationally have embraced all things Britcore.”




American content creator @the_quivey10 has a bucket list for when he visits the UK, which includes eating a Greggs sausage roll.
Photo: Newscast/UIG/Getty Images

TikTok, which has more than one billion users globally, said interest in Britishcore content was also reflected in the popularity of videos of The Killers performing Mr Brightside in front of a London audience following England’s victory in the Euro 2024 semi-final. Gemma Collins and Gary Barlow are TikTok regulars too, and TikTok said a video of the Take That singer in a vineyard saying “this is my idea of how to spend a pretty lovely day” has become a popular meme overseas.

Alwyn Turner, a senior lecturer at the University of Chichester and an expert on British popular culture, said a common thread among some of Britain’s most popular cultural exports was a sense of “cheekiness”.

Turner also pointed out how increased interest in British culture could benefit the national mood.

“As a citizen, when you achieve fame in America, it gives you a sense of optimism. It makes the country feel alive and vibrant. There’s a certain feel-good feeling in Britain when the world wants you,” he said.

Sade, the British singer whose eponymous band’s hits include “Smooth Operator” and “No Ordinary Love,” hasn’t released an album since 2010. But TikTok has maintained interest in her music, with clips featuring her songs up 63%.

The singer’s looks have also become popular on the platform. One mood-board clip has garnered nearly 5 million views, and the hashtag #sadegirl has also recently taken off.

A combination of travel trends and aesthetic sensibility has made the Pacific Northwest of the United States popular on TikTok, with short slideshows and video edits capturing the region’s atmospheric woodland scenery. An account dedicated to the trend, @throughthepnw, has 1.6 million followers.

Food is a popular genre on TikTok, and Filipino cuisine has been gaining attention recently, in part due to interest in “boodle fights,” communal banquets in which participants eat with their bare hands at tables covered with banana leaves.

Another trend favours easy, non-violent video games such as Wild Flowers, which features farming and magic, and Moonstone Island, a game about collecting creatures. There is also a rise in “deskscapes”: relaxing gaming setups built around plants and indirect lighting.

Educational influencers in fields such as history and science are becoming increasingly popular on TikTok. One example is Katie Kennedy (@thehistorygossip), a content creator who takes an unconventional approach to history education, with titles such as “Were people having sex during the plague?” and “Why did these royals enjoy pure body odor?”. Although she only started on TikTok in January 2024, during her final year of university, Kennedy’s page has over 500,000 followers and 13.9 million likes. Her debut book, History Gossip: Was Anne of Cleves a Beggar? And 365 Other Historical Curiosities, will be released on October 7th.

Source: www.theguardian.com