In late 2023, Israel aimed to assassinate Ibrahim Biari, the top Hamas commander in the Northern Gaza Strip, who supported the planning of the October 7th massacre. However, Israeli intelligence could not find Mr. Biari, who was believed to be hidden in a network of tunnels under Gaza.
Israeli officers turned to new military technology infused with artificial intelligence, according to Israeli and American officials who recounted the incident. The technology had been developed a decade earlier but had never been used in combat. The need to find Mr. Biari gave new urgency to improving the tools, so engineers in Israel's Unit 8200, the country's equivalent of the National Security Agency, quickly integrated AI into them.
Following this, Israel intercepted Mr. Biari's communications and tested the new AI audio tool on them. Using that information, Israel ordered an airstrike on the area on October 31, 2023, killing Mr. Biari. More than 125 civilians were also killed in the attack, according to Airwars, a London-based conflict monitor.
Audio tools were just one example of how Israel leveraged the conflict in Gaza to quickly test and deploy AI-backed military technology.
In the past 18 months, Israel has also combined AI with facial recognition software to match partially obscured or injured faces to identities, used AI to compile potential airstrike targets, built Arabic-language AI models, and created chatbots to scan and analyze Arabic-language data from text messages, social media posts, and other sources.
Many of these initiatives were collaborations between enlisted soldiers in Unit 8200 and reservists who work at technology companies such as Google, Microsoft, and Meta. Unit 8200 also ran an innovation hub known as "the studio," which paired experts with AI projects.
Despite Israel’s advancements in AI technology, deploying such tools could result in false identifications and arrests, as well as civilian casualties, as Israeli and American officials have pointed out. Some officials have expressed concerns about the ethical implications of AI tools, which may increase surveillance and lead to further civilian harm.
European and American defense officials say the fighting has given them a glimpse of how such technologies will be used in future conflicts, since no other country has gone as far as Israel in experimenting with real-time AI tools in battle.
Hadas Lorber, director of the Institute for Responsible AI at the Holon Institute of Technology in Israel and a former senior director at the Israeli National Security Council, said, "It has led to groundbreaking techniques on the battlefield and valuable benefits in combat."
But Lorber stressed that the technology raises serious ethical questions and that AI requires checks and balances, with humans making the final decisions.
An Israeli military spokesperson declined to comment on specific technologies because of their classified nature, but affirmed Israel's commitment to the lawful and responsible use of data technology tools and said the strike against Mr. Biari was under investigation.
Meta and Microsoft declined to comment. Google said that the work its employees perform as reservists in various countries is not connected to Google.
Israel has previously used conflicts in Gaza and Lebanon to test and advance military technology, including drones, phone-hacking tools, and the Iron Dome missile defense system.
… (continued)
Source: www.nytimes.com