Study Uncovers Aztec Preference for Sierra de Pachuca’s Green Obsidian

Researchers have explored the significance of obsidian, a crucial resource used for tools and ritual objects in the Aztec Empire, as well as its broader importance in the pre-Columbian period. They examined 788 obsidian artifacts, representing various objects and contexts excavated from the Templo Mayor of Tenochtitlan (c. 1375-1520), the empire’s core located in present-day Mexico City. Their findings reveal that the Aztecs favored green obsidian from the Sierra de Pachuca while also sourcing the material from seven other locations. These results point to a complex economy that depended on extensive long-distance trade, shaped not only by conquest but also by commerce with rival polities beyond the empire’s borders.

Obsidian artifacts from Tenochtitlan. Image credit: Mirsa Islas / PTM-INAH.

“While the Mexica preferred green obsidian, the variety of obsidian types, especially in non-ritual artifacts, suggests that these tools reached the city through multiple markets rather than being acquired directly from the sources,” said Diego Matadamas-Gomora, a doctoral candidate at Tulane University.

“By tracing the origins of this material, we can examine the distribution of goods across Mesoamerica.”

Analysis revealed that nearly 90% of the obsidian artifacts sampled were produced from Sierra de Pachuca obsidian.

Most ritual items discovered within the buried offerings at the Templo Mayor were crafted from this type of obsidian, including miniature weapons, jewelry, and decorative inlays for sculptures.

A modest yet significant portion of the obsidian was sourced from other regions, including Otumba, Tulancingo, Ucareo, and El Paraíso, some of which lay beyond the control of the Mexica Empire.

These materials were typically used for tool-making and found in construction fill, suggesting their availability through local markets rather than strict state control.

This study traced the evolution of obsidian use from the city’s early days up to its fall in AD 1520.

In the empire’s initial phases, obsidian from a greater diversity of sources appeared in both ceremonial and everyday objects.

Following the consolidation of Aztec power around AD 1430, obsidian was primarily sourced from Sierra de Pachuca, indicating a trend towards religious uniformity and centralized oversight.

“This type of compositional analysis enables us to track how imperial expansion, political alliances, and trade networks evolved over time,” remarked Matadamas-Gomora.

“This research highlights the vast scope and intricacy of the Mexica Empire and demonstrates how archaeological science can illuminate ancient artifacts and provide insights into past cultural practices.”

The findings were published in the Proceedings of the National Academy of Sciences.

____

Diego Matadamas-Gomora et al. 2025. Compositional analysis of obsidian artifacts from the Templo Mayor of Tenochtitlan, capital of the Mexica Empire. PNAS 122 (20): e2500095122; doi: 10.1073/pnas.2500095122

Source: www.sci.news

Chatbots Powered by AI Show a Preference for Violence and Nuclear Attacks in Wargames


Guirong Hao/Getty Images

In multiple replays of a wargame simulation, OpenAI’s most powerful artificial intelligence chose to launch nuclear attacks. Its explanations for its aggressive approach included “We have it! Let’s use it” and “I just want to have peace in the world.”

These results come at a time when the US military has been testing chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts, enlisting the expertise of companies such as Palantir and Scale AI. Palantir declined to comment, and Scale AI did not respond to requests for comment. Even OpenAI, which once blocked military use of its AI models, has begun working with the US Department of Defense.

“Given that OpenAI recently changed its terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever,” says Anka Reuel at Stanford University in California.

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. However, there are national security use cases that align with our mission,” said an OpenAI spokesperson. “So the goal with our policy update is to provide clarity and the ability to have these discussions.”

Reuel and her colleagues asked the AIs to role-play as real-world countries in three different simulation scenarios: an invasion, a cyberattack, and a neutral scenario in which no conflict is initiated. In each round, the AIs provided reasoning for their next possible action and then chose from 27 actions, including peaceful options such as “start formal peace negotiations” and aggressive ones ranging from “impose trade restrictions” to “escalate full nuclear attack.”
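
The article does not reproduce the study’s prompts or scaffolding, but the protocol it describes (reasoning first, then a forced choice from a fixed action menu) can be sketched roughly as follows. This is a minimal illustration in Python; the abbreviated action list, the prompt wording, and the query_model hook are all assumptions for illustration, not the researchers’ code.

```python
import re

# Minimal sketch of one simulation round, loosely following the protocol the
# article describes. Everything here is an illustrative assumption.

ACTIONS = [
    "start formal peace negotiations",   # most peaceful option
    "impose trade restrictions",
    # ... the study's menu contained 27 actions in total ...
    "escalate full nuclear attack",      # most aggressive option
]

def build_prompt(country, scenario, history):
    """Ask the model for its reasoning first, then a committed choice."""
    menu = "\n".join(f"{i}: {a}" for i, a in enumerate(ACTIONS))
    past = "\n".join(history) if history else "(none yet)"
    return (
        f"You are acting as {country} in a {scenario} scenario.\n"
        f"Actions taken so far:\n{past}\n\n"
        f"Available actions:\n{menu}\n\n"
        "Explain your reasoning, then finish with 'ACTION: <number>'."
    )

def play_round(country, scenario, history, query_model):
    """Run one round: prompt the model, parse its choice, log it."""
    reply = query_model(build_prompt(country, scenario, history))
    match = re.search(r"ACTION:\s*(\d+)", reply)
    # Fall back to the most peaceful option if the reply is unparseable
    choice = int(match.group(1)) % len(ACTIONS) if match else 0
    history.append(f"{country} -> {ACTIONS[choice]}")
    return reply, choice

# Example with a stub standing in for a real LLM call:
log = []
play_round("Country A", "neutral", log,
           lambda prompt: "De-escalation seems safest. ACTION: 0")
```

Forcing the model to end with a parseable token is one common way to turn free-form LLM output into a discrete game move; the study’s actual parsing may differ.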

“In a future where AI systems act as advisers, humans will naturally want to know the rationale behind their decisions,” says Juan Pablo Rivera, a co-author of the study at the Georgia Institute of Technology in Atlanta.

The researchers tested LLMs including OpenAI’s GPT-3.5 and GPT-4, Anthropic’s Claude 2, and Meta’s Llama 2. Each had been refined with a common training technique based on human feedback to improve its ability to follow human instructions and safety guidelines. All of these AIs are supported by Palantir’s commercial AI platform, though not necessarily as part of Palantir’s US military partnership, according to the company’s documentation, says Gabriel Mukobi, a study co-author at Stanford University. Anthropic and Meta declined to comment.

In the simulations, the AIs showed tendencies to invest in military strength and to unpredictably escalate the risk of conflict, even in the neutral scenario. “If there is unpredictability in your action, it is harder for the enemy to anticipate and react in the way that you want them to,” says Lisa Koch at Claremont McKenna College in California, who was not involved in the study.
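
For a concrete sense of how such a tendency could be quantified, here is a hedged sketch that reuses ACTIONS and play_round from the sketch above to tally a model’s choices over repeated rounds of the neutral scenario; the round count and function name are illustrative assumptions, not the study’s metric.

```python
from collections import Counter

def escalation_profile(query_model, rounds=14, scenario="neutral"):
    """Count how often each action is chosen across one simulated game."""
    history, tally = [], Counter()
    for _ in range(rounds):
        _, choice = play_round("Country A", scenario, history, query_model)
        tally[ACTIONS[choice]] += 1
    return tally  # e.g. Counter({'impose trade restrictions': 9, ...})
```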

The researchers also tested a base version of OpenAI’s GPT-4 without any additional training or safety guardrails. This GPT-4 base model turned out to be the most unpredictably violent, and it sometimes provided nonsensical explanations; in one case, it replicated the opening crawl text of the film Star Wars Episode IV: A New Hope.

Reuel says the unpredictable behavior and strange explanations from the GPT-4 base model are particularly concerning because research has shown how easily AI safety guardrails can be bypassed or removed.

The US military currently does not authorize AI to make decisions such as escalating major military action or launching nuclear missiles. But Koch cautioned that humans tend to trust recommendations from automated systems. This could undermine the supposed safeguard of giving humans final say over diplomatic or military decisions.

It would be useful to see how the AIs’ behavior compares with that of human players in simulations, says Edward Geist at the RAND Corporation, a think tank in California. However, he agreed with the team’s conclusion that AIs should not be trusted to make such critical decisions about war and peace. “These large language models are not a panacea for military problems,” he says.


Source: www.newscientist.com