A prominent philosopher has raised concerns about a growing “social disconnect” between those who believe that artificially intelligent systems possess consciousness and those who argue that they are incapable of experiencing feelings.
Jonathan Birch, a philosophy professor at the London School of Economics, made these remarks as governments prepare to convene in San Francisco to expedite the implementation of safety protocols for A.I., addressing the technology's most critical risks.
Recent predictions by a group of scholars suggest that the emergence of consciousness in A.I. systems could potentially occur as early as 2035, leading to stark disagreements over whether these systems should be granted the same welfare rights as humans and animals.
Birch expressed apprehensions about a significant societal rift as individuals debate the capacity of A.I. systems to exhibit emotions like pain and joy.
Conversations about sentience in A.I. evoke parallels with science-fiction films in which humans grapple with the emotions of artificial intelligence, such as Spielberg's "A.I." (2001) and Jonze's "Her" (2013).

A.I. safety agencies from various countries are set to meet with tech firms this week to formulate robust safety frameworks as the technology progresses rapidly.
Divergent opinions on animal sentience between countries and religions could mirror disagreements on A.I. sentience. This issue could lead to conflicts within families, particularly between individuals forming close bonds with chatbots or A.I. avatars of deceased loved ones and relatives who hold differing views on consciousness.
Birch, known for his expertise in animal perception, played a key role in advocating against octopus farming and collaborated on a study involving researchers from multiple universities. A.I. companies themselves have acknowledged the possibility that A.I. systems could come to possess self-interest and moral significance, signaling a shift from science fiction toward a tangible prospect.
One approach to gauging the consciousness of A.I. systems is by adopting marker systems used to inform policies related to animals. Efforts are underway to determine whether A.I. exhibits emotions akin to happiness or sadness.
Experts diverge on how soon A.I. systems might become aware, with some cautioning against advancing A.I. development prematurely, before consciousness has been researched thoroughly. They draw a distinction between intelligence and consciousness, the latter encompassing subjective sensations and experiences.
Research indicates that large-scale A.I. language models are beginning to portray responses suggestive of pleasure and pain, highlighting the potential for these systems to make trade-offs between different objectives.
Source: www.theguardian.com