How Arc Raiders’ Generative AI Sparked a Battle for the Future of Gaming

Arc Raiders is a strong late contender for game of the year. Set in a multiplayer environment teeming with hostile drones and military robots, players must navigate a world where trust is scarce: will you risk cooperating with other raiders trying to return to humanity’s underground refuge, or will they ambush you for your hard-earned spoils? Interestingly, the majority of gamers I’ve spoken to suggest that humanity is, for the most part, choosing unity over conflict.

In a recent GameSpot review, Mark Delaney offers an intriguing perspective on Arc Raiders’ capacity for narrative and camaraderie, noting its unexpectedly optimistic outlook compared with other multiplayer extraction shooters. “In Arc Raiders, while players can eliminate one another, it’s not indicative of a grim future for humanity; the fact that most choose to help each other instead is a testament to its greatness as a multiplayer experience.”

However, it’s worth noting a layer of irony within the narrative of humanity banding together against machines. The game utilizes AI-generated text-to-speech, developed from real actors’ performances, and also employs machine learning to refine the enemy robots’ behavior and animations. Writer Rick Lane voiced ethical concerns over this: “For Arc Raiders to capitalize on human social instincts while simultaneously reassembling the human voice through technology, disregarding the essence of human interaction, reflects a troubling lack of artistic integrity,” he wrote in a Eurogamer article.

The increasing use of generative AI in game development has become a contentious issue among players (though gauging actual sentiment remains challenging). Many players, including myself, find the trend uncomfortable. Last week, the latest Call of Duty drew significant ire for allegedly using AI-generated art. Advocates for generative AI argue it empowers smaller developers; however, Call of Duty is a multibillion-dollar franchise that can afford to employ skilled artists. The same logic applies to the AI-generated voice lines in Arc Raiders.

This raises existential questions for those within the gaming industry: artists, writers, voice actors, and programmers alike may face obsolescence due to technology that replaces expensive talent with cheaper, less capable machines. EA has mandated that its employees use in-house AI tools, a policy that has been widely criticized. Krafton, meanwhile, has boldly branded itself an AI-first developer while offering voluntary resignation packages to its South Korean employees.

Controversy ensues… Call of Duty: Black Ops 7 has faced accusations of using AI-generated art. Photo: Activision

Interestingly, those defending generative AI in gaming predominantly belong to the corporate sector rather than everyday players or developers. Tim Sweeney of Epic Games (notably wealthy) weighed in on Eurogamer’s Arc Raiders review on X, lamenting the infusion of “politics” into video game reviews and envisioning a future in which games offer endless personalized dialogue crafted from human performances.

Personally, I prefer human-crafted dialogue over AI-generated lines. I want characters to express sentiments that resonate with human experiences, delivered by actors who grasp the emotional depth. Award-winning voice actor Jane Perry remarked in an interview with GamesIndustry.biz, “Will a robot be on stage accepting the Best Performance award at the gaming or BAFTA awards? I believe audiences would overwhelmingly favor authentic human performances. However, the ambition to replace humans with machines is a powerful driving force among the tech elite.”

Through years of covering this industry, I’ve realized that the dynamics in the gaming world often reflect broader societal trends. A few years back, there was a spike in investments in Web3 and NFT gaming, which ultimately led to a collapse due to their unattractive, computer-generated aesthetics. When big tech latched onto the “metaverse” concept, gaming companies had already been developing improved iterations for years. Additionally, Gamergate illustrated how to weaponize discontented youth, influencing both political strategy and current cultural conflicts. Hence, anyone concerned about AI’s ramifications on work and society should remain vigilant to the waves the technology creates among players and developers alike—these could serve as intriguing indicators.

What we’re witnessing appears to be a familiar clash between creators and those who benefit from their work. Moreover, players are beginning to challenge whether they should pay the same price for games that feature low-quality, machine-generated visuals and sounds. New conversations are emerging regarding which applications of AI are culturally and ethically permissible.

What to play

A plot with few travelers… Goodnight Universe. Photo: Nice Dream/Skybound Games

From the creators of the poignant ‘Before Your Eyes,’ Goodnight Universe allows you to experience the world through a super-intelligent six-month-old baby endowed with extraordinary abilities. The narrative unfolds through the baby’s internal dialogue. Young Isaac believes he possesses wisdom beyond his age, yet struggles to convey his thoughts and emotions to his family. Soon, he discovers telekinetic powers and the ability to read minds, catching the unwanted attention of others. If equipped with a webcam, players can interact by looking around and blinking. This game delivers an emotional narrative and explores themes that resonate deeply, refreshing nostalgic memories of my own children as infants.

Available: PC, Nintendo Switch 2, PS5, Xbox
Estimated play time: 3-4 hours

What to read

A first look… Benjamin Evan Ainsworth as Link and Bo Bragason as Zelda in the upcoming “The Legend of Zelda” movie, set for 2027. Photo: Nintendo/Sony
  • Nintendo has shared the first image from the forthcoming Legend of Zelda movie, featuring Bo Bragason and Benjamin Evan Ainsworth enjoying a serene moment in a meadow. Here, Link bears a striking resemblance to his Ocarina of Time appearance. I was pleased to see that Princess Zelda wields a bow, suggesting she will be an active participant in the action rather than a mere damsel in distress.

  • Nominees for the upcoming Game Awards include Ghost of Yōtei, Clair Obscur: Expedition 33, and Death Stranding 2. (The Guardian has traditionally been part of the voting jury, but that changes this year.) As we reported last week, the annual event has recently discontinued its Future Class program for emerging developers, which felt more like a marketing tactic.

  • A team of modders has revived Sony’s notorious failed shooter Concord from the dead – however, the company issued a takedown notice for gameplay footage shared on YouTube, even though the fan-run server continues to operate.



Question block

A fantasy realm… The Elder Scrolls: Cyrodiil from Oblivion. Photo: Bethesda Game Studio

This week’s question comes from reader Jude:

“I recently started playing No Man’s Sky. This is the first game that has felt like it could actually happen – Ready Player One combined with the now ubiquitous Japanese isekai genre, where characters enter alternate worlds. Does anyone else play this game? Can I actually live there?”

I had similar feelings when I first explored Oblivion two decades ago. It might sound amusing now that I play the remastered version, but at the time it contained everything I desired: vibrant towns, delicious food and literature, interesting characters, magical creatures, and the allure of combat. Given the chance, I would absolutely reside in Cyrodiil from The Elder Scrolls (shown above). Although it is small compared with modern open-world titles, there’s no need for an overwhelmingly vast world when you’re immersed in a fantasy escape – we seek an engaging experience without excessive complexity.

There are definitely virtual realms I would not want to inhabit – the perilous lands of World of Warcraft’s Azeroth, the chaotic Mushroom Kingdom, and Elden Ring’s vibrant yet overwhelming Lands Between, to name a few. Hyrule, meanwhile, feels rather desolate, while the appeal of No Man’s Sky arises from its player interactions.

I’ll throw this question out to my readers: Is there a video game world you’d like to call home?

If you have questions for the Question Block or feedback on the newsletter, please reply or email us at pushbuttons@theguardian.com.

Source: www.theguardian.com

Experts Call for Overhaul of A-levels and GCSEs to Adapt to Generative AI in Education

Oral assessments, enhanced security protocols, and quicker marking are all on the agenda as generative artificial intelligence (AI) looks set to redefine the future of student examinations.

As the 2025 exam season draws to a close with the announcement of GCSE results, AI is already making waves, even though students still sat their exams primarily with conventional pen and paper.

With a transformation in exam preparation underway, students are increasingly turning to personalized AI tutors that generate study materials tailored to their specific needs, potentially leading to improved results.

“Thanks to AI, students can ask questions outside of class or at unconventional times without fear of judgment, which enhances their understanding.

“This trend really accelerated over the summer,” noted Sandra Leaton Gray, a professor of education futures at the University of London Institute of Education. “Students can discuss the marking criteria, upload their work, and run sample answers through the AI. They can even ask, ‘How can I enhance my answer?’ It’s like having an unending tutor.”

Some experts argue that as AI continues to evolve rapidly, a completely new exam format will be necessary to evaluate how effectively students are utilizing it. Dr. Thomas Lancaster, a computer scientist at Imperial College London specializing in generative AI and academic integrity, remarked, “This type of examination feels inevitable at this point.”

Lancaster cautioned that AI could facilitate new forms of cheating. “We need to enhance security measures in exams and provide more training to help identify banned devices,” he stated.

“Currently, communication devices can be as discreet as hidden earpieces, and AI-enabled smart glasses introduce even more hazards.”

Sir Ian Bauckham, chief regulator of Ofqual, England’s qualifications watchdog, highlighted the risks AI poses to the use of extended writing assessments for evaluating student knowledge.

In a conversation with the Guardian, he expressed concern about the extended project qualification (EPQ), in which students carry out independent research alongside their A-levels and which is worth half an A-level.

“I believe it holds significant importance, and universities have indicated they value it, too,” he said. “I wouldn’t want to take drastic actions, but I am concerned about how extensively AI will support students in this qualification.”

“Anyone advocating a shift away from supervised testing systems, where AI use can be controlled, will now encounter a much more challenging situation.”

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, echoed these concerns, stating:

“Whether it’s AI or human, the exam format must change to emphasize assessing comprehension of the material. This could involve vivas or discussions of the examined topics.”

He also cautioned that as students increasingly integrate technology into their daily lives, early indications of AI addiction are surfacing.

On the potential advantages of AI for the testing system, Jill Duffy, chief executive of the exam board OCR and chair of the Joint Council for Qualifications, said the examination board is exploring ways in which AI could speed up marking and improve its quality.

One possibility is that GCSE and A-level results could be delivered within a month instead of two. OCR is currently trialling AI to convert handwritten responses into digital text, aiming to minimize delays caused by illegible handwriting. If successful, this could mean students receive university places based on actual results rather than predicted grades.

Duffy noted that vivas and other forms of oral assessment are already prevalent in higher education. “If we see this happening there, could it start to be adopted in schools? It’s a possibility,” she said.

Lancaster concluded: “Overall, exams are here to stay in some form, but the nature of those exams may differ significantly from how they currently appear.”

Source: www.theguardian.com

Enterprise CIOs are hesitant to embrace generative AI technology

Hearing the vendor hype, you might think enterprise buyers are all in when it comes to generative AI. But as with any new technology, large companies tend to tread carefully. Throughout this year, CIOs have been watching as vendors eagerly announce new products powered by generative AI.

Some companies are actually looking at reducing spending, or at least smoothing out spending, and are not necessarily looking for new ways to spend. The big exception is when technology allows companies to operate more efficiently and accomplish more with less.

Generative AI certainly has the potential to do that, but it comes at a price: either higher fees for AI features in a SaaS product, or the cost of calling a large language model API – plus the cost of building and running the software internally – if you roll your own.
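As a rough illustration of the trade-off described above, the two cost models can be compared with a few lines of arithmetic. All figures here are hypothetical placeholders, not real vendor prices:

```python
# Back-of-envelope comparison of two illustrative cost models for adding
# generative AI features. All figures are hypothetical, not vendor prices.

def saas_addon_cost(seats: int, addon_per_seat_month: float) -> float:
    """Monthly cost of a per-seat AI add-on bundled into a SaaS product."""
    return seats * addon_per_seat_month

def api_cost(requests_per_month: int, tokens_per_request: int,
             price_per_1k_tokens: float) -> float:
    """Monthly cost of calling a token-metered LLM API directly."""
    total_tokens = requests_per_month * tokens_per_request
    return total_tokens / 1000 * price_per_1k_tokens

# Example: 500 employees on a $30/seat add-on, vs. 200k requests/month
# at roughly 1,500 tokens each through an API.
saas = saas_addon_cost(500, 30.0)      # 15,000.0
api = api_cost(200_000, 1_500, 0.01)   # 3,000.0
print(f"SaaS add-on: ${saas:,.0f}/mo  API: ${api:,.0f}/mo")
```

The point is not the specific numbers but that the break-even depends heavily on usage volume, which is exactly why CIOs want ROI data before committing.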

Either way, it’s important for those implementing the technology to understand whether they’re getting a return on their investment. Many companies are proceeding cautiously: 56% of respondents reported that generative AI is affecting their investment priorities, according to a July Morgan Stanley survey of CIOs at large companies, yet only 4% had launched a significant project. Most were still in the evaluation or proof-of-concept stage. This is a rapidly changing area, but the figures are consistent with what we heard in our conversations with CIOs.

That said, similar to the consumerization of IT a decade ago, CIOs are under pressure to deliver the kind of experience people get when they use ChatGPT, says Madrona Ventures partner Jon Turow.

“I think it’s undeniable that all of our corporate employees, who are internal customers of CIOs and CTOs, have tried ChatGPT, know how good it is and know where its limits are. So CIOs are under pressure to deliver that level of experience,” Turow told TechCrunch.

Some of that pressure may come from the CEO, and there is a tension between the desire to please internal customers with something potentially transformative like generative AI and CIOs’ natural tendency toward caution. Jim Rowan, a principal at Deloitte, said that making this work requires building structure and organization over time, and that he is working with customers on how to roll out generative AI across the enterprise in an organized way.

“A lot of the way we work with companies is to think about what infrastructure they need to be successful. Infrastructure doesn’t necessarily mean technology – it also means people: who they are, what the process and governance looks like – and giving them the ability to set it up,” Rowan said. A big part of that is talking through use cases and how the technology can address specific problems.

This is consistent with how the CIOs we spoke to are implementing the technology in their organizations. Monica Caldas, CIO at insurance company Liberty Mutual, started with a proof of concept involving a few thousand people and is looking for ways to scale it across the 45,000-employee company.

“We know that generative AI will continue to play a critical role in virtually every part of our company. We are investing in use cases to further develop and refine them,” she said.

Mike Haney, CIO of Battelle, a science and technology-focused company, has also been exploring generative AI use cases this year. “We’ve been working on advancing AI for the past six to nine months, and we’re currently building out specific use cases for different teams and functions within the company,” he said. Although it is still early and the company is still exploring where the technology can help, he says the results so far have been good in terms of finding more efficient ways of working.

Kathy Kay, executive vice president and CIO of financial services firm Principal Financial Group, says her company started with a research group. “We opened it up to any employee with an interest or passion, and the number grew to about 100 people. It’s a combination of engineers and business people,” she said. The group is now working on about 25 use cases, three of which it plans to put into production soon.

Sharon Mandel, Juniper Networks’ CIO, said her company is participating in an early pilot of Microsoft’s Copilot for Office 365 and that the feedback has been mixed: anecdotally, some people like Copilot and others are less impressed. Measuring productivity gains remains a challenge, she said, even though Microsoft has started offering dashboards that at least show levels of adoption and usage.

“The difficult thing about this is that we don’t have baseline data on people’s productivity levels. So until Microsoft’s dashboards give us a good understanding of how our users are actually using these tools, we will be relying on somewhat anecdotal information,” she said.

When companies hear about the potential power of generative AI, it’s no surprise that they want to learn more about it and use it to make their organizations run more efficiently. At the same time, it’s natural for executives to be cautious: these tools are still in their early stages, and companies will need to learn through experimentation whether this is truly a revolutionary technology.

Source: techcrunch.com

Unity’s aim to provide developers with ethical and useful generative AI through Muse

Unity is joining other companies in providing users with generative AI tools, but it has been careful to ensure that those tools (unlike some) are not built on a foundation of unlicensed work. Muse, a new suite of AI-powered tools, starts with texture and sprite generation and will gradually move into animation and coding as it matures.

The company announced these features at its Unite conference in San Francisco, alongside Unity 6, the next big version of its engine and cloud-based platform. After a turbulent few months that saw major product plans completely scrapped and a CEO ousted, the company is no doubt looking to get back to business as usual.

Unity has traditionally positioned itself as a champion of small developers who lack the resources to adopt bigger development platforms like rival Unreal. AI tools can therefore be a useful addition for a developer who cannot afford to spend days creating, say, 32 slightly different high-resolution wooden wall textures.

There are many tools out there to help you create and modify assets like this, but it’s often desirable to be able to say “make more like this one” without leaving your main development environment. The simpler the workflow, the more you can do without worrying about details like formatting or siloed resources.

AI assets are also often used in prototyping, where things like artifacts and slightly wonky quality (which these days are common regardless of model) don’t really matter. However, illustrating your gameplay concept with original, well-made art rather than stock sprites or free sample 3D models can make the difference in communicating your vision to publishers and investors.

Examples of sprites and textures generated by Unity’s Muse.

Another new AI feature, Sentis, is a little harder to pin down. “It enables developers to bring complex AI data models into the Unity runtime to create new gameplay experiences and features,” Unity’s press release states. So it’s a kind of bring-your-own-model system, with some features built in, and it’s currently in open beta.

AI for animation and movement is in development and will be added next year. These highly specialized scripting and design processes could benefit greatly from generative first drafts and automated helpers.

Image credits: unity

The Unity team emphasized that a big part of this release is to ensure that these tools are not overshadowed by future IP infringement lawsuits. Image generators like Stable Diffusion are fun to play with, but they’re built using assets from artists who never agreed to have their work taken and regurgitated.

“To provide usable output that is safe, responsible, and respectful of the copyrights of other creators, we challenged ourselves to innovate the training techniques for the AI models that power Muse’s sprite and texture generation,” says a blog post on responsible AI accompanying the announcement.

The company said it used a completely custom model trained on images it owns or has licensed. It then essentially used Stable Diffusion to generate a larger synthetic dataset from the small, carefully curated dataset it had assembled.

Image credits: unity

For example, a wood wall texture might be rendered in several variations and color types by a Stable Diffusion model, without any new outside content being added – at least, that’s how it’s described as working. As a result, the new dataset is not only based on responsibly sourced data but is also one step removed from it, making it less likely that a particular artist or style will be duplicated.
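Unity hasn’t published its pipeline in detail, but the general idea – expanding a small set of licensed seed assets into a larger synthetic dataset of variations, with provenance tracked back to each seed – can be sketched as a toy, standard-library-only example. In Unity’s case a diffusion model generates the actual images; here a seeded RNG merely produces variation parameters, to show the bookkeeping:

```python
import random

# Toy illustration of synthetic-dataset expansion: each licensed seed asset
# is expanded into several parameterized variations. The asset names and
# parameter fields are hypothetical; a real pipeline would feed these
# parameters to an image model.

def expand_dataset(seed_assets, variations_per_asset, rng_seed=0):
    rng = random.Random(rng_seed)  # fixed seed keeps the expansion reproducible
    dataset = []
    for asset in seed_assets:
        for i in range(variations_per_asset):
            dataset.append({
                "source": asset,              # provenance back to licensed data
                "variation_id": i,
                "hue_shift": rng.uniform(-0.1, 0.1),
                "noise_seed": rng.randrange(2**32),
            })
    return dataset

synthetic = expand_dataset(["wood_wall_01", "wood_wall_02"], variations_per_asset=4)
print(len(synthetic))  # 8 entries derived from 2 seed assets
```

Because every synthetic entry records its `source`, the expanded dataset stays auditable back to the responsibly sourced originals – the property Unity emphasizes.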

Although this approach is safer, Unity admitted that it reduces the quality of the initial models it is providing. As noted above, though, the actual quality of generated assets is not always what matters.

Unity Muse costs $30 per month as a standalone product. No doubt the community will soon weigh in on whether it’s worth the price.

Source: techcrunch.com

Civitai, a generative AI content marketplace with millions of users, receives backing from Andreessen Horowitz

Stable Diffusion has attracted a large following, particularly among those experimenting with new AI technologies and creating their own models. Civitai is a startup whose platform lets members share their AI image models and content with other enthusiasts; the name is a play on “civitas,” meaning community. Civitai CEO Justin Maier recognized the need for such a platform after working on web development projects at Microsoft. He saw people creating AI-generated images and posting them on platforms like Reddit and Discord, but felt the need for a centralized community in which to share and discover AI image models.

Civitai became the go-to place for sharing AI models and images in 2023 and has since grown to over 3 million registered users, receiving about 12-13 million unique visitors every month. The company raised $5.1 million in funding from Andreessen Horowitz (a16z) in June 2023.

The platform allows users to upload images to train their own AI image models, and each generated image comes with metadata that includes details such as the prompts and resources used. However, concerns have been raised about artists’ work being used without consent to train AI models, which Civitai is addressing by allowing artists to flag resources they believe use their work. There are also concerns about non-consensual pornographic AI images being shared on the platform, which the company says it is working to address.
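As a rough sketch of what such a per-image metadata record might look like, the example below builds and round-trips one as JSON. The field names are illustrative assumptions, not Civitai’s actual schema:

```python
import json

# Sketch of a per-image generation-metadata record, in the spirit of what
# Civitai attaches to generated images. Field names and values here are
# hypothetical examples.

def make_generation_record(prompt, negative_prompt, model, resources, seed):
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "model": model,           # base checkpoint used
        "resources": resources,   # e.g. add-on models applied on top
        "seed": seed,             # lets others reproduce the image
    }

record = make_generation_record(
    prompt="a watercolor lighthouse at dusk",
    negative_prompt="blurry, low quality",
    model="example-checkpoint-v1",
    resources=["example-lora-a"],
    seed=1234,
)
serialized = json.dumps(record)        # shared alongside the image
print(json.loads(serialized)["seed"])  # 1234
```

Publishing records like this is what makes images on the platform reproducible – and it is also what lets artists trace which resources were used, the flagging mechanism described above.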

In the future, Civitai aims to expand beyond AI image models to other modalities and to build a consumer-facing mobile app that acts as a repository for AI images. The company also plans to let users monetize their creations, but for now everything on the site is free to use.

Source: techcrunch.com