Artificial intelligence (AI) and algorithms can be, and are being, used to exacerbate radicalization, deepen polarization, and spread racism and political instability, according to an academic at Lancaster University.
Joe Barton, professor of international security at Lancaster University, argues that AI and algorithms are not just tools deployed by national security agencies to thwart malicious online activity. He suggests they can also foster polarization, radicalism, and political violence, thereby becoming a threat to national security in themselves.
In addition to this, he says the securitization process – which presents technology as an existential threat – has helped shape how AI is designed, how it is used, and the harmful outcomes it produces.
AI in securitization and its social impact
Professor Barton’s paper was recently published in Elsevier’s high-impact journal Technology in Society.
“AI is often framed as a tool to counter violent extremism,” Professor Barton says. “This is the other side of the argument.”
The paper examines how AI has been securitized throughout its history and in media and popular-culture depictions, producing polarizing and radicalizing effects that have contributed to political violence, and it explores these dynamics through contemporary examples of AI's impact.
AI in War and Cybersecurity
The article explores how the classic film series The Terminator, which depicts a holocaust caused by an “advanced and malignant” artificial intelligence, has shaped public perceptions of AI and of the catastrophic consequences machine consciousness could have for humanity. Professor Barton argues the films did more than anything else to sow fear of AI – in this case, fear of nuclear war and of a deliberate attempt to wipe out humanity.
“This distrust of machines, the fears associated with them, and their association with biological, nuclear, and genetic threats to humanity have contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potential,” writes Professor Barton.
Professor Barton says advanced drones, such as those used in the war in Ukraine, are now capable of full autonomy, including functions such as target identification and recognition.
There have been widespread and influential campaigns, including at the United Nations, to ban “killer robots” and to keep humans in the loop on life-or-death decisions. Yet, he says, the proliferation of armed drones continues to advance rapidly.
In cybersecurity – the security of computers and computer networks – AI is being deployed in significant ways, most prominently in (dis)information and online psychological warfare.
The Putin regime’s interference in the 2016 US election process and the subsequent Cambridge Analytica scandal showed how AI, combined with big data (including social media), can polarize, encourage extremist beliefs, and manipulate identity groups to produce political effects. This demonstrated AI’s power and potential to divide societies.
The social impact of AI during the pandemic
And during the pandemic, AI was seen as a positive tool for tracking and tracing the virus, but it also raised privacy and human rights concerns.
This article examines AI technology itself, arguing that problems exist in the design of AI, the data it relies on, how it is used, and its outcomes and impacts.
The paper concludes with a strong message for researchers in cybersecurity and international relations.
“AI certainly has the ability to transform societies in positive ways, but it also presents risks that need to be better understood and managed,” writes Professor Barton, an expert on cyber conflict and emerging technologies who is part of the university’s Security and Protection Science initiative.
“It is clear that understanding the dichotomous impact of technology is critical at every stage of its development and use.
“Cybersecurity and international relations scholars have an opportunity to incorporate these elements into new AI research agendas and avoid treating AI as a politically neutral technology.
“In other words, concerns about the secure deployment of AI systems and their societal impacts should not be overshadowed by their use in international and geopolitical struggles.”
Reference: “Algorithmic Extremism? The Securitization of Artificial Intelligence (AI) and Its Impact on Radicalism, Polarization, and Political Violence” by Joe Barton, 14 September 2023, Technology in Society.
DOI: 10.1016/j.techsoc.2023.102262
Source: scitechdaily.com