Carers in desperate situations across the UK need all the help they can get. However, researchers argue that the AI revolution in social care requires a strong ethical foundation and should not rely on unregulated AI chatbots.
A preliminary study by researchers at the University of Oxford found that some care providers are using generative AI chatbots such as ChatGPT and Bard to draw up care plans for the people they support.
Dr. Caroline Green, an early career research fellow at the University of Oxford's Institute for Ethics in AI, highlighted the risk this practice poses to patient confidentiality. Personal data entered into generative AI chatbots can be used to train the underlying language models, she noted, raising concerns about data exposure.
Dr. Green further warned that caregivers acting on inaccurate or biased information in AI-generated care plans could inadvertently cause harm. Despite these risks, AI offers benefits such as easing administrative workloads and allowing care plans to be updated more frequently.
Technologies based on large language models are already making their way into health and care settings. PainChek, for instance, uses AI-trained facial recognition to identify signs of pain in people who cannot communicate verbally. Other innovations, such as Oxehealth's Oxevision, help monitor patient wellbeing.
Various projects are in development, including Sentai, a care monitoring system for people without caregivers, and a device from the Bristol Robotics Laboratory to improve safety for people with memory loss.
While the creative industries worry that AI could replace human workers, the social care sector faces the opposite problem: a shortage of staff. Even so, the use of AI in social care raises challenges that must be addressed.
Lionel Tarassenko, professor of engineering at the University of Oxford, emphasized the importance of upskilling people in social care so they can adapt to AI technologies. He shared a personal experience of caring for a loved one with dementia and highlighted how AI tools could enhance caregiving.
Podcast co-host Mark Topps relayed social care workers' concerns that using AI technology could unintentionally breach regulations and put them at risk of disqualification. Regulators are being urged to issue guidance to ensure responsible AI use in social care.
Efforts are under way, in collaboration with organizations across the sector, to develop enforceable guidelines defining responsible AI use in social care.
Source: www.theguardian.com