
Comment: Will robots have rights in the future?

If we get to create robots that are also capable of feeling pain then that will be somewhere else that we have to push the circle of moral concern outwards, because I certainly think we would have to include them in our moral concern once we’ve actually created beings with capacities, desires, wants, enjoyments, miseries that are similar to ours.

Singer, P. (2019). Will robots have rights in the future? Big Think.

Peter Singer makes a compelling argument that sentient robots (assuming we reach the stage where we develop Artificial General Intelligence) ought to be treated in the same way that we treat each other, since they would exhibit the same capacity for pain, desire, joy, and so on as human beings.

I’m interested, though, in what happens when we push the moral boundary further, since there’s no reason to think that human beings represent any kind of ceiling on what can be felt and experienced. Will artificially created sentient beings deserve “more” or different rights than human beings, based on their capacity to experience a wider range of feelings than is available to us? Will it get to the point where we are to AI-based systems what pigs are to us?

By Michael Rowe

I'm a lecturer in the Department of Physiotherapy at the University of the Western Cape in Cape Town, South Africa. I'm interested in technology, education, and healthcare, and I look for places where these things meet.
