On Robot Rights

By Ayush Prakash

Does AI need rights? Which AIs, built by which companies, in which countries, performing which tasks, deserve the humanistic feature of a “right”? Looking at the progression of artificial intelligence, it’s clear that a point may come (indeed, it may already have passed) where wider society must grant an AI basic “human” rights.

You may have noticed my use of quotation marks around these loose terms; I will pin them down over the course of this short but wide-ranging blog post. My goal is to figure out how, when, and why AI will achieve basic human rights.

Machine rights, to condense both AI and robots into one clean term, concern machines to which we ascribe sentience or consciousness. The mere act of ascribing these animalistic and humanistic terms to machines is a story in itself, but to break it down: the onset of sentience or consciousness, or the genuine arrival of both, will be enough to end the debate and begin the documentation of machine rights.

By the end of this post, you can make that decision for yourself; feel free to leave a comment with your stance on machine rights.

Human Rights

What are rights? It is a philosophical and politically charged question. When we talk about human rights specifically, what do we mean?

At its most basic, human rights represent a framework for how all humans should act, interact, and treat one another. Principles such as freedom, equality, dignity, and recognition, without any distinction such as race, sex, colour, language, religion, political stance, or other status, are integral to this universal declaration.

This outline for humans provides a basis for understanding whether machines need rights. To most people, these inanimate and soulless, hollow and meaningless metal contraptions should not and cannot be granted a “right,” because machines have no conception of themselves; in other words, no agency. While this is currently true, I must be clear about what the act of giving rights to machines would actually involve.

As it stands, machines possess no agency. Algorithms that act according to a recipe of instructions do not tick the boxes that would justify granting them rights. Ditto for robots.

For example, the most famous robot, the Roomba, which you may find in a corner of your household, is not going to be granted a right. Why? Because a right would not change the fundamental nature of the way it acts or operates; other robots would not act any differently toward it, and neither would humans. The same can be said of social media algorithms. Every form of AI and robot, in 2022, has no agency and merely responds to external input.

Let’s say a bill was passed that granted the Roomba freedom of speech. Okay? Nothing would change about the way it acts or cleans your pet’s hair off the floor. 
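To make this concrete, here is a minimal sketch, assuming a purely reactive rule-based controller in the spirit of a robot vacuum; the sensor names and actions are hypothetical illustrations, not any real product’s firmware. The point is that every action is a fixed response to input, with no internal goals for a right to protect.

```python
# Minimal sketch of a purely reactive controller (hypothetical, not
# iRobot's actual firmware): each action is a fixed response to the
# current sensor reading, with no goals, memory, or self-model.

def reactive_step(bumper_pressed: bool, cliff_detected: bool) -> str:
    """Map the current sensor reading directly to an action."""
    if cliff_detected:
        return "reverse"        # never drive off a ledge
    if bumper_pressed:
        return "turn_right"     # obstacle ahead: pick a new heading
    return "drive_forward"      # default: keep vacuuming

# The control loop is just sense, look up response, act, forever.
# Granting this loop freedom of speech would change nothing about it.
for sensors in [(False, False), (True, False), (False, True)]:
    print(reactive_step(*sensors))
```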

Again, I must re-emphasize my point: machine rights do not mean granting rights to run-of-the-mill digital or physical machines. That would be redundant and useless, a waste of time and money. If a machine is not affected by the way humans act or react toward it, then giving it a right does nothing. So what do I mean, and what am I getting at?

The Imitation Game

Let’s start with imitation. Surely, if we are cognizant that we are building machines that merely imitate consciousness or sentience, that kills the need for granting rights. However, it is more complicated than that.

Giving robots the ability to feel pain, to love, and to be attracted to something or someone all falls under the realm of sentience. If you’re asking why we would do this, the answer is simple: we will have to program these emotions into machines in order for them to function better.

If a robot feels pain, it can better relate to humans and aid us in more personal ways. Remember the episode about killer robots where we talked about programming robot soldiers with a sense of their own vulnerability to prevent them from taking lethal action against humans? The same concept applies here. 

But instead of vulnerability, it would be love, pain, attraction, disgust, and the other basic emotions we commonly find in animals. The acknowledgement that machines experience these things, even if it’s an imitated experience, can be enough to ascribe sentience to these machines.
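To illustrate, here is a toy sketch of what “programmed pain” might look like as a self-preservation signal; the damage scale, threshold, and action names are my own illustrative assumptions, not a description of any existing robot.

```python
# Toy sketch of "programmed pain": a damage signal that overrides the
# robot's current task once it crosses a tolerance threshold, which is
# the functional role pain plays in animals. All names and values here
# are illustrative assumptions.

PAIN_THRESHOLD = 0.7  # assumed tolerance on a 0..1 damage scale

def choose_action(task_action: str, damage_level: float) -> str:
    """Pick the task action unless simulated pain demands withdrawal."""
    if damage_level >= PAIN_THRESHOLD:
        return "withdraw_and_self_protect"  # pain overrides the mission
    return task_action

print(choose_action("assist_patient", damage_level=0.2))  # assist_patient
print(choose_action("assist_patient", damage_level=0.9))  # withdraw_and_self_protect
```

Whether a signal like this counts as “real” pain or only an imitation of it is exactly the question that machine rights will force us to answer.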

Once this engineering feat is achieved, we will have to make a choice: do we grant machines a form of animal rights, or create a separate category for them? I believe we will create a separate category, because machines play a role in our lives that animals cannot and will not occupy.

Machines perform tasks for us, teach us, talk to us, and know us to an extent that is seemingly incomprehensible. If you don’t believe me that machines know us better than we know ourselves, consider the recommendation algorithms of TikTok, YouTube, and Instagram, which serve you personalized content that only you would find interesting. For this reason, we must create a new branch of rights called “machine rights” or “robot rights.”

Ascribing Morality

Moving on from sentience, we arrive at consciousness. I am not just talking about the big breakthrough of achieving AGI from scratch, however; I am also talking about uploading humans into cyberspace.

Obviously this is futuristic, and with current technology it won’t be feasible for many decades. Still, the act of uploading carbon-based humans into silicon forces us to contend with digital consciousness. It is here that machine rights enter the realm of artificial consciousness.

Digital humans would obviously have to be granted rights, but those wouldn’t necessarily be plain human rights. There would need to be more to them, along with restrictions and regulations; again, this would fall under the subcategory of “machine rights.” Nick Bostrom, in The Ethics of Artificial Intelligence, the paper he co-wrote with Eliezer Yudkowsky, outlines the ethical issues involved in granting moral status to machines.

He states that sentience, which we already discussed, is important for attributing moral status, but he also mentions another term, sapience, a set of capacities associated with higher intelligence, such as self-awareness and rationality. These two attributes would, according to the paper, be enough to ascribe a moral status, and subsequent rights, to a machine.

To cap off this point, Bostrom brings up the Principle of Substrate Non-Discrimination, which states that if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status. In other words, if two entities are equally conscious, but one is carbon-based and the other is silicon-based, the moral status of the two entities is identical, and they should be treated and documented as such.
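Stated formally (my own paraphrase in logical notation, where F stands for functionality, C for conscious experience, and M for moral status):

```latex
% Principle of Substrate Non-Discrimination, paraphrased in logic.
% F(x): functionality, C(x): conscious experience, M(x): moral status.
\forall x \,\forall y:\;
  \bigl( F(x) = F(y) \,\wedge\, C(x) = C(y) \bigr)
  \;\rightarrow\; M(x) = M(y)
```

Note that substrate appears nowhere in the condition: carbon versus silicon simply does not enter into the determination of moral status.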

I think this is the final nail in the coffin of the debate over machine rights: as soon as sentience or sapience is achieved in a machine, even if it’s imitated, the machine must be granted moral status and treated as the equal of humans.

It may sound like science fiction, otherworldly even, to some; but to those who admire the pursuit of artificial intelligence, this is an inevitability.