An artificial intelligence tool hosted by Amazon to improve recruitment for the UK Ministry of Defence risks publicly identifying defence personnel, according to a government assessment.
The automated systems, which tailor defence job advertisements to attract diverse candidates through inclusive language, draw on data including service members' names, roles, and email addresses, which Amazon stores in the United States. A government document published for the first time today says a data breach could therefore lead to the identification of defence personnel.
Although the risk has been classified as “low,” the Ministry of Defence said “strong safeguards” are in place from its suppliers: Textio, Amazon Web Services, and Amazon GuardDuty, a threat-detection service.
The government acknowledges several risks associated with the use of AI tools in the public sector, as highlighted in a series of documents released to improve transparency around the use of algorithms in central government.
Ministers are advocating for the use of AI to enhance the UK’s economic productivity and deliver better public services. Safety measures are emphasized to mitigate risks and ensure resilience.
The UK government is collaborating with Google and Meta to pilot AI in public services. Microsoft is also offering its AI-powered Copilot system to civil servants, aligning with the government’s ambition to adopt a more startup-oriented mindset.
Some of the risks and benefits identified in current central government AI applications include:
- The potential for an AI-powered lesson planning tool, which helps teachers customise lesson plans efficiently, to generate inappropriate lesson material.
- The introduction of a chatbot to answer queries about child welfare in family court, providing round-the-clock information and reducing wait times.
- The use of a policy engine by the Ministry of Finance to model tax and benefit changes accurately.
- The risk that excessive reliance on AI by users in food hygiene inspections could degrade human decision-making, leading to inconsistent scoring of establishments.
These disclosures will be recorded in the expanded Algorithm Transparency Register, which details information about 23 central government algorithms. Some algorithms for which there are indications of bias, such as those used in the Department for Work and Pensions welfare system, have yet to be recorded.
…
Source: www.theguardian.com