
Should AI Have Rights? Is It Really Up to Us? | Bruhad Dave

Bruhad Dave

Definitions are difficult. Often a very good test of whether one understands something is to ascertain whether one can define that something. Fact: we have not been able to define humans and human minds in any conclusive and exhaustive manner. Some might suggest that this fact is what defines us as humans. The argument could then be made that our inability to define ourselves should be included in our definition of ourselves. It goes round and round. This is exactly why so much of philosophy is incomprehensible until you are thinking about something else entirely. Point is, definitions are difficult.

A simple Google search brings up the following definition of artificial intelligence:

artificial intelligence


noun: artificial intelligence; noun: AI

  1. the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

However, if you were to look at a result from Merriam-Webster, you get this:

Definition of Artificial Intelligence

1 : a branch of computer science dealing with the simulation of intelligent behavior in computers

2 : the capability of a machine to imitate intelligent human behavior

Indeed, if you were to enter ‘ai definition’ in your search bar, you’d get this:

ai /ˈɑːi/


noun: ai; plural noun: ais

  1. the three-toed sloth.

Yes, exactly.

The thing with AI is that we know of many examples. A lot of them date from the time when computers took up whole warehouses. The people who thought about those machines imagined that something which could do maths could logically progress to near-human or even superhuman intelligence. True enough: we have unnervingly accurate autocorrect; Google Translate is not poetic, but it works; facial recognition software has long since made it onto smartphones; Siri, Alexa and Google Assistant star in impressive adverts that prove more accurate than we might have guessed; and self-driving cars no longer raise too many eyebrows.

Presumably, Marvin the Paranoid Android was only so morose so much of the time because he could do far more than the aforementioned things, and there really wasn't any work that could be said to be worthy of the metal fellow. It's fortunate that all he really did was sulk, and sneer at the mental faculties of live talking mattresses in marshes that one time, rather than take over the world. Oh, wait! There are so many fictional instances where AI does just that.

At this point, it becomes useful to classify AI. Generalised AI is software, whether in a limbless computer or a robot, that can pass for human and does not frequently spout variations of 'I'm sorry, I do not understand. Would you like me to search the web for that?' (It turns out a lot of humans do exactly that, but that's another matter.) The other kind we see a lot of even today: a specialised AI will do one thing and one thing only, but it will do that thing so well that it will blow your socks off and compel you to murmur in awe. A one-trick pony with rocket boosters.

In short, it will be generalised AI that may or may not wipe out humankind.

But then, it is to be noted that the perfect generalised AI, one which can pass a Turing test, is the goal at the moment: the epitome, the one we've all been waiting for, sci-fi incarnate. And people are worried. What if that last bit is a tad too true? What if the AI decides, using hard, irrefutable logic, that humans are pointless? Cue mass hysteria. Go on. Scream. And the people who are worried about this sort of thing scream even louder, in outrage this time, when other groups of people talk about giving rights to this potentially tyrannical series of algorithms.

They’re not the only ones: people who see humans as exceptional among beings hold that these pieces of software, these seemingly random strings of code, can never be human and thus do not deserve the same rights humans do. They say that even an algorithm that can keep rewriting and improving itself will never be more than a programmable calculator: a machine.

Rights dictate two things: that the entity in question be able to bear responsibility for its own actions and be commended or reprimanded as appropriate; and that said entity be protected from agents that seek to deprive it of this ability. That is largely it. Really. You should be able to say whatever the hell you want, and depending on what you said, you should receive the corresponding reward or punishment. And some random entity shouldn’t be able to shut you up.

So, let’s take some AI which we have deemed to possess near-human intelligence, possibly even the same as our own. Okay. This AI, the argument goes, should not be granted the right to free speech because it is essentially a machine, or lives in one. Detractors reply that the human mind is itself a very complicated machine. Subscribers to human exceptionalism say that this is bull and that humans are in fact alive (slow clap). Machines aren’t alive. (Life is another thing we have only partially managed to define in any satisfactory manner. Just saying.) This means that an AI can be damaged. Not harmed, damaged.

The obvious counterargument is: what if we coded the AI to be moral, and to feel? It would no longer be simply intelligent but possibly more human than artificial. But no, comes the reply, this is an extrinsic factor. Programming these things into an AI is not the same as the AI already having them, as we, moral animals, do from birth. And another thing, say the human-exceptionalists: robots can never feel pain. We can. They cite all of this as the reason we have animal rights (which is a good thing) and as the reason we shouldn’t grant rights to AI.

The flip side of this coin is that if our little AI attains sentience, or consciousness, then we will have no ethical choice other than to give it rights. Now, sentience is vaguely explained as the ability to be aware of, and reflect on, subjective experience. Consciousness would be the state of being self-aware, and aware that you have a mind. If a robot could tell you that according to its algorithms your pants don’t match your shirt, but that it personally feels you pull off the outfit very well, you probably would not say to yourself, ‘It’s just a computer program, what would it know?’ You would use theory of mind to assume a mind for this nifty robotic shopping assistant, and thank it sincerely at the billing counter later on.

It’s true: we empathise with even the dumbest of robots. Who is to say we won’t feel completely at home if along came a robot that was as intelligent as us?

Of course, a bit of thought leads to far too many questions.

If there are multiple types of intelligences, why not multiple types of consciousness? If all we are talking about is sentient robots, are we ourselves anything more than our minds?

If you go far enough, you would even be led to ponder the nature of death, and whether it really is an inevitability, given how Johnny Depp’s character uploaded himself onto a computer in Transcendence.

The question we do need to ask ourselves, frequently and with sufficient feeling and proper pronunciation, is: what exactly are we talking about? We must periodically define that subject and periodically update the definition we use, because the definition changes a lot of the outcomes of the conversation. It is generally agreed that we will not need to draw up AI rights in all their detail for quite some time yet. But then, it seems we are talking about sentient, conscious entities, entities, that is, which think and feel, over and above being capable of complicated calculus.

And if that is true, then it really isn’t our place to say whether AI entities deserve rights similar to ours, because when it is really time to decide, it won’t be humans lobbying for said rights while the AI sits in a lab somewhere. These human-equivalent entities (if they are that) will insist on having a say, and we, as moral animals, will deem it unethical not to let them.

Until then, we really cannot say. And that, as the human condition would have it, is that.


Bruhad Dave is a third year student of Zoology (B.Sc.) at St. Xavier’s College, Ahmedabad. He can be reached at
