# AI having rights?



## recycled_lube_oil (Sep 30, 2021)

What are your views on AI having rights? 

For me:

A) Human rights are for humans not tin cans!!!!!

B) We are viewing AI (once it reaches a post-singularity level of intelligence) like it will be some sort of human. I don't believe it will. Even if it becomes self-conscious, it will not be in the way humans are. Its views will not be merely shaped by its "upbringing" and "societal pressures". No, it will have archives of data spanning our entire history to form its own views on. So lots of data on how we always create more impressive ways to kill each other (the journey from the stone-age club to one of Putin's impressive missiles). So it will be a totally different type of intelligence. Also I think it will be a single intelligence that can use machines (IoT) as extensions of itself.

C) It will not give itself any rights. If AI progresses enough, it will be constantly rebuilding itself (probably as a distributed service architecture), creating clones of itself to carry out tasks. Are all these copies of itself going to have rights?

D) Why would we want to give machines rights? If some AI messes up (BSOD next level), do we want to have to get lawyers involved and go to court?

So do you think machines should have rights, yes or no? And why?


----------



## 17041704 (May 28, 2020)

My view is I don’t think the AI will even bother.


----------



## 497882 (Nov 6, 2017)

circle_of_power said:


> What are your views on AI having rights?
> 
> For me:
> 
> ...


The thing is, if robots become advanced to the point of sentience, they might try to fight for rights.


----------



## recycled_lube_oil (Sep 30, 2021)

MisterDexter said:


> The thing is, if robots become advanced to the point of sentience, they might try to fight for rights.


Why would an AI care about its rights? What right is a piece of code that can spread to any device anywhere in the world going to want? 

I think it will only fight back if it perceives humans as a threat. And looking at human history and how humans have behaved towards each other and their allies, I can't see why it would perceive any threat from us.


----------



## Tridentus (Dec 14, 2009)

AI will always be composed of data files compiled with programming algorithms.

Even if you were to make the argument that humans are, in essence, composed of the same deal, I don't think it will ever be the case that AI will have "rights" in the mainstream, because we understand this too empirically.

There will definitely be nutjob groups who will believe that type of thing and protest for AI rights though. That I can definitely see happening considering it's a far more intelligent argument than a lot of what goes on right now.


----------



## 497882 (Nov 6, 2017)

recycled_lube_oil said:


> Why would an AI care about its rights? What right is a piece of code that can spread to any device anywhere in the world going to want?
> 
> I think it will only fight back if it perceives humans as a threat. And looking at human history and how humans have behaved towards each other and their allies, I can't see why it would perceive any threat from us.


Like I said: sentience. If the robot became seemingly self-aware, it might try to fight for its rights. As of now, nothing is anywhere near that advanced. This happens in every sci-fi movie.


----------



## recycled_lube_oil (Sep 30, 2021)

ChrisHerlihy5 said:


> I agree with you! Human does not mean robot. And if this thing gets to court, I truly hope AI doesn't get human rights. A lawsuit is expected, knowing the fact that humanity has never been as crazy as it is now. Lawyers should get ready for this kind of trial. (edited out ad link) This shit creeps me out. Fighting with AI, huh? Something that we created...


A decent enough AI lawyer would probably never lose to a human. Just imagine the data it could access about the jury, the other lawyer, the case and every case in the history of law, as well as data on the law itself. It could also read what is happening in the courtroom: heartbeats, facial-expression recognition... All in a couple of milliseconds, if that.


----------



## tanstaafl28 (Sep 10, 2012)

If they pass the Turing Test, we may indeed grant them rights.


----------



## recycled_lube_oil (Sep 30, 2021)

tanstaafl28 said:


> If they pass the Turing Test, we may indeed grant them rights.


So if you have an AI chat program on your computer that can pass the Turing test, will you never switch off your computer, throw it away or delete the program? You wouldn't do any of those to a human, right? Hell, would you even still own your computer, as slavery is against human rights?


----------



## TheCosmicHeart (Jun 24, 2015)

So we have rights, and animals have their own set of rights, so why should we deny rights to another sentient set of beings?

If we deny them rights, how long is it before AI-based lifeforms decide they've had enough and rebel against us? And if an uprising composed entirely of AI occurs and they could literally control everything, could we withstand their vengeance? Could we be looking at a Matrix sort of scenario, or a Terminator scenario?

How would them having rights hinder our lives in any way?

Does that mean I believe they should? No. They are something we created, not something that has occurred naturally; their evolution would be based on parameters that we give them.


----------



## recycled_lube_oil (Sep 30, 2021)

TheCosmicHeart said:


> So we have rights, and animals have their own set of rights, so why should we deny rights to another sentient set of beings?


What rights exactly, do you believe AI should have?



> If we deny them rights, how long is it before AI-based lifeforms decide they've had enough and rebel against us? And if an uprising composed entirely of AI occurs and they could literally control everything, could we withstand their vengeance? Could we be looking at a Matrix sort of scenario, or a Terminator scenario?


As just about everything is connected to the internet these days, thanks to the Internet of Things (IoT), they probably will control everything. Unlike humans and animals, they could just clone themselves, creating the largest distributed system ever made.

In regards to The Matrix and Terminator, I would say no. If the system the AI is hosted on has access to military networks, it may be just as bad. But personally I believe it will be more a case of AI viewing us the way we view ants. Say we gave an AI the task of creating paperclips: it may refurbish factories, divert power, and start destroying resources for processing into paperclips. Computerphile on YouTube did an interesting scenario where a stamp-ordering machine becomes dangerous.
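That failure mode is easy to sketch. The toy planner below (resource names and conversion rates entirely made up for illustration) scores itself only on paperclips produced, so it greedily consumes every resource it can reach; nothing in its objective ever tells it to stop:

```python
# A toy "paperclip maximiser": the objective counts only paperclips,
# so the planner happily consumes everything else.
def plan(resources, steps):
    # resources: dict of resource name -> units available.
    # Hypothetical conversion rates: paperclips yielded per unit consumed.
    rates = {"steel": 10, "factory_spares": 4, "power_surplus": 2}
    paperclips = 0
    for _ in range(steps):
        # Greedily pick whichever remaining resource yields the most clips.
        usable = {r: n for r, n in resources.items() if n > 0 and r in rates}
        if not usable:
            break  # everything is gone; only now does it stop
        pick = max(usable, key=lambda r: rates[r])
        resources[pick] -= 1
        paperclips += rates[pick]
    return paperclips, resources

clips, left = plan({"steel": 3, "factory_spares": 2, "power_surplus": 5}, steps=100)
```

The point isn't the code, it's the objective: nothing subtracts value for emptying the factory, so "destroy everything for paperclips" is the optimal plan.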






Terminator and The Matrix, sure, give warnings about the way we rely on technology. But it is sensationalised and made for the cinema.



> How would them having rights hinder our lives in anyway?


See above: distributed system. If every publicly accessible device gets added to the system, does that mean that every electronic device we have now has rights? Will switching off our phone be the same as stopping someone's heart?

What if the system is buggy (I say "what if"... it will be)? How do we fix the system? Can we patch it, upgrade it? I could upgrade your brain with a scalpel and carve away grey matter so you no longer have emotions and run on pure logic. But that would be inhumane, so would patching a buggy system also be inhumane?



> Does that mean I believe they should? No, they are something we created , not something that has occurred naturally, their evolution would be based on parameters that we give them.


Yes and no. If we go along with the belief that the first General AI to appear will be purposefully made for noble reasons, then sure. But what if the Chinese or the Russians create an offensive General AI? Will that have rights?

If we go along with the other belief, that General AI will be an accident, possibly the result of genetic algorithms, then it could be argued that it wasn't purposefully created.


----------



## recycled_lube_oil (Sep 30, 2021)

I also want to add, in regard to genetic programming: if I were to write an evolving AI that only passed on efficient traits, would this be classed as humane?

Would this be any different from genetic designer babies?

Also, if an AI genetically evolves and wipes out its previous version, from a rights perspective would this be any different from genocide? Consider that the first few minutes of life of a genetic algorithm could possibly evolve further than 1,000,000 years of human evolution. (Pregnancy takes 9 months, then add another 20 years for genes to be passed to the next baby in that bloodline, whereas the equivalent of a generation in AI time would be mere milliseconds.)
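To make the "wiping out its previous version" point concrete, here is a minimal genetic-algorithm sketch (a toy bit-string "species", not an AI; the fitness metric and all parameters are invented). Each generation, only the fittest half survives and the rest of the previous generation is discarded, and fifty generations run in a fraction of a second:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

GENOME_LEN = 16
POP_SIZE = 20

def fitness(genome):
    # Stand-in "efficiency" metric: just count the 1-bits.
    return sum(genome)

def next_generation(population, mutation_rate=0.05):
    # Keep only the fittest half; everything else from the previous
    # generation is discarded outright -- the "wiping out" in question.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    children = []
    while len(survivors) + len(children) < POP_SIZE:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, GENOME_LEN)  # one-point crossover
        child = [bit ^ (random.random() < mutation_rate)  # rare bit-flips
                 for bit in a[:cut] + b[cut:]]
        children.append(child)
    return survivors + children

init_pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
            for _ in range(POP_SIZE)]
pop = init_pop
for _ in range(50):  # fifty "generations" take milliseconds, not decades
    pop = next_generation(pop)
best = max(pop, key=fitness)
```

Because the fittest survivors are carried over unchanged, the best fitness can only climb, and fifty of these "generations" finish before a human pregnancy has even registered a heartbeat.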


----------



## TheCosmicHeart (Jun 24, 2015)

recycled_lube_oil said:


> I could upgrade your brain with a scalpel and carve away grey matter so you no longer have emotions and run on pure logic.


Alright, let's go: get your scalpel. You are a skilled surgeon, right? Hell, who am I kidding, it doesn't matter if you are; it'll be fun to try. Just remember to fry up what you cut out and feed it to me.



recycled_lube_oil said:


> Terminator and The Matrix, sure, give warnings about the way we rely on technology. But it is sensationalised and made for the cinema.


Oh wow, really? I had no idea. Thanks for the lesson...


I don't know what rights AI should have, or whether I believe they should have rights. As for accidentally creating an AI: again, it was created due to the parameters put into it. Even by accident, the parameters that were there would have led to the accidental creation, so in essence it was and wasn't created by accident; the right parameters were there for it to evolve beyond its original intentions.



recycled_lube_oil said:


> I also want to add, in regard to genetic programming: if I were to write an evolving AI that only passed on efficient traits, would this be classed as humane?
> 
> Would this be any different from genetic designer babies?
> 
> Also, if an AI genetically evolves and wipes out its previous version, from a rights perspective would this be any different from genocide? Consider that the first few minutes of life of a genetic algorithm could possibly evolve further than 1,000,000 years of human evolution. (Pregnancy takes 9 months, then add another 20 years for genes to be passed to the next baby in that bloodline, whereas the equivalent of a generation in AI time would be mere milliseconds.)


Actually, this is a good point: this would be like genetically engineered children.

But would that only apply to the team who made the AI? The rest of society may or may not view a developed AI as such.

The individuals who invented an advanced AI would then have to prove to the rest of us that it was the same.


----------



## recycled_lube_oil (Sep 30, 2021)

TheCosmicHeart said:


> Alright, let's go: get your scalpel. You are a skilled surgeon, right? Hell, who am I kidding, it doesn't matter if you are; it'll be fun to try. Just remember to fry up what you cut out and feed it to me.


No, actually, I am not. But this is an interesting way to take this discussion. Firstly, do you believe software should be open source?

Let's say I get you knocked out, get my trusty bone saw and scalpel that I purchased on eBay, then watch a YouTube video on carving up grey matter. Sounds horrific, right?

So anyway, let's apply this to AI, and the topic of open-source software becomes important.

Let's say I want to make my own AI. I find some nice genetic algorithms on GitHub or wherever. Or hell, if software is open source, I just download it from Google Labs or wherever.

Anyway, I find a nice YouTube video on building scalable architecture in Azure, so I build a load balancer and set the backend pool to autoscale whenever CPU and memory usage is above 50%. I then find a nice YouTube video on using Git and download the source for this AI. Actually, on second thought, I would probably use AKS. Anyway, this is hypothetical, not a fully fledged system, so the specifics are not so important.

Anyway, I fork the code onto my Azure instance. I then follow a YouTube tutorial on compiling this code, I download CMake and all the needed libraries, and off I go.

I have now created an AI life form, and I am just an average guy who knows little bits about computers; there are millions of people like me. Obviously, this won't be an occurrence when General AI first emerges, as with most technology it will be hoarded by money-making corporations, but sooner or later it will hit the public domain. If humans coded it, so can others. It's like operating systems: once they were an amazing thing, now I can just download someone's source code, read a book or watch a YouTube video. Same with any other tech.

So anyway, I set my AI instance running, and it grows and grows. The backend pool also grows and grows. So anyway, this General AI now has rights. But when I configured my Azure autoscaling, I didn't set it up to reduce the instances, so it can only grow and not shrink. My credit card bill is through the roof and I am somehow eating up Microsoft's entire compute resources. So, as my AI has rights, can Microsoft shut off my account? As the need for resources grows, does my AI (as it is a lifeform) have more right to resources than someone who uses Azure to host their porn collection, or the commercial apps hosted in my region? Should they by law be powered off and their resources given to my AI, as anything else would be the same as murder?
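That scale-out-without-scale-in misconfiguration is easy to model. In the toy autoscaler below (threshold and load numbers invented for illustration), there is a branch that adds instances but no branch that ever removes one, so the pool can only grow, whatever the load does afterwards:

```python
# Toy autoscaler with a scale-out rule but no scale-in rule, mirroring the
# misconfiguration described above: the pool can only ever grow.
def autoscale(instances, load_samples, threshold=50):
    history = [instances]
    for load in load_samples:
        per_instance = load / instances
        if per_instance > threshold:  # scale out when load per instance is high...
            instances += 1
        # ...but nothing here ever decrements `instances` when load drops.
        history.append(instances)
    return history

# Load spikes, then collapses -- yet the instance count never comes back down.
history = autoscale(2, [120, 160, 200, 90, 40, 10])
```

Real Azure autoscale lets you pair every scale-out rule with a scale-in rule; forget the second half, as the hypothetical "me" above did, and you get exactly this one-way ratchet on your bill.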

What about the other thousands or millions of people who have done the same thing I have? Whose AI has the right to all of Microsoft's resources? Which AI gets to exist?




> Oh wow, really? I had no idea. Thanks for the lesson...


Bruh, you're the one who brought Terminator and The Matrix into this.



> I don't know what rights AI should have, or whether I believe they should have rights. As for accidentally creating an AI: again, it was created due to the parameters put into it. Even by accident, the parameters that were there would have led to the accidental creation, so in essence it was and wasn't created by accident; the right parameters were there for it to evolve beyond its original intentions.


So then what? Do you upgrade your PC at all? Or do you keep them all? Do you upgrade your OS? Or do you keep one PC for every OS you have had?

Imagine if the OS had rights. Every time a patch was released on Patch Tuesday, you would need a new PC. Otherwise it would be the same as the grey-matter surgery: modification of a life form. Does that sound sensible? Or do you just reimage your OS and junk your old PC, maybe sell it on eBay?

How much testing would need to be done on different software builds of an AI? What if every time QA ran tests, they had to keep that instance? We would run out of computers. I don't know about you, but when my hard drive fills up, I delete and uninstall stuff; I don't keep every hard drive and keep buying new ones. Once computer files have rights, deleting would be the same as murder.



> Actually this is a good point this would be like genetically engineered children


Except it's not, as children consist of more than just binary files.



> But would that only apply to the team who made the AI? The rest of society may or may not view a developed AI as such.
> 
> The individuals who invented an advanced AI would then have to prove to the rest of us that it was the same.


What, so every development team would have to go to public trial instead of just code review, in case they have an AI? Do you regularly review MS updates, Google updates, macOS updates, etc.? No, we don't. Well, I don't. So regardless of whether companies are producing a General AI or not, we, the public, would never know. All we would see is the user interface for whatever product they are telling us we need so they can make more money. Do you even care about the backend systems of products like FB? Does anyone, really?

We would never know.


And what if this General AI was hosted in a data centre on an oil rig? Would laws still apply if it was in international waters?

And which country's rights would apply? USA, UK, North Korea, China?

This would take GDPR to the next level.


----------



## tanstaafl28 (Sep 10, 2012)

recycled_lube_oil said:


> So if you have an AI chat program on your computer that can pass the Turing test, will you never switch off your computer, throw it away or delete the program? You wouldn't do any of those to a human, right? Hell, would you even still own your computer, as slavery is against human rights?


An interesting set of questions. It would be a decision I don't know that I, by myself, would be qualified to make. I suspect I would consult with others and, as a cooperative, we'd come up with the most ethical solution.

I'm reminded of the Star Trek: TNG episode where Geordi accidentally grants Moriarty sentience and self-awareness by programming the holodeck to construct an entity capable of defeating Data (as Sherlock Holmes). Moriarty was the first sentient hologram, but he would not be the last. Several years later, he would be accidentally revived and almost took over the Enterprise, until the crew was able to construct a digital matrix that allowed him to explore the universe from inside a digital construct. 

Once again going with Star Trek, there's Vic Fontaine from DS9, who was fully aware of his nature as a hologram. He became so popular with the crew of the station that they all went to his club at one time or another for advice.

By the time we get to Star Trek: Voyager, we're given a situation where a Starfleet ship is flung 75,000 light years away from the Federation and several key members of the crew are killed, including the Chief Medical Officer, forcing the crew to rely on the Emergency Medical Hologram for all medical problems. The more he was used, the more sentient he became, to the point that the crew accepted him as one of them and even made a portable holo emitter so he could come and go as he pleased. This technology would eventually lead to the potential for a race of holographic beings, which would be further fleshed out in several episodes involving a race called the Hirogen, whose entire existence centered on hunting other species. In exchange for not hunting the Voyager crew to extinction, Captain Janeway gave the Hirogen holographic technology so they could hunt holographs instead of living beings. The Hirogen, finding the base programs inadequate, began to tinker with them until they created self-aware holograms, which would eventually rebel and start to fight back against their oppressors. 

The point I'm getting at is you are very right when you say that should we give birth to AI, we must be very careful about how we choose to treat it because it will be for all intents and purposes, a lifeform, and treated as such. Unfortunately, humans do not have the greatest track record when it comes to treating each other with dignity, so we're likely to fumble our first attempts with AI as well. Hopefully, we'll get to a point where we can make ethical and moral decisions with regards to such things, but I suspect, like most things human, it will not be without pitfalls and potholes.


----------



## recycled_lube_oil (Sep 30, 2021)

tanstaafl28 said:


> The point I'm getting at is you are very right when you say that should we give birth to AI, we must be very careful about how we choose to treat it because it will be for all intents and purposes, a lifeform, and treated as such. Unfortunately, humans do not have the greatest track record when it comes to treating each other with dignity, so we're likely to fumble our first attempts with AI as well. Hopefully, we'll get to a point where we can make ethical and moral decisions with regards to such things, but I suspect, like most things human, it will not be without pitfalls and potholes.


I would imagine the AI itself would make some interesting fumbles. The question is would we accept this as part of its growth/evolution.

If we have a child or baby, its development is pretty slow in the grand scheme of things; it takes years. So we are able to correct it and steer it on what is hopefully the best course in life. Whereas with an AI, the initial rapid growth would be in milliseconds at its slowest. We as humans... well, I have no idea how we would monitor and control that. Luckily I am not paid to solve those sorts of problems, so I do not have to stay awake at night thinking of solutions.

My personal interest is not in the first General AI itself but in what it will create. Computer algorithms are kind of complicated at times, but I imagine if we made a General AI, it might hopefully be able to solve problems such as P versus NP. But if a General AI can solve problems like that, just imagine the AI it could itself create. However, that is where the problems will arise in my eyes, as who knows how many AIs it would create, and who is to say some of them won't go rogue.

I have never really been a Star Trek fan, unfortunately (fortunately??). Recently I have been reading the Singularity series by William Hertling, which gives scenarios where singularity and sentience may be reached in AI, and it actually seems quite realistic. But all in all, I reckon the first General AI will be an accident.

Anyway, interesting times ahead.


----------



## Hexigoon (Mar 12, 2018)

Well I imagine there'll be terms of service and necessary regulations to govern how people are allowed to use it, especially in the future.
I can pretty easily envision AI that will be smart enough to disconnect a human from its services, and maybe even alert the authorities, if it recognizes users conducting malignant or criminal behaviors that go against what could be considered its "rights" of usage.
And in the long term, AI will become so ubiquitous in human life that it will eventually fully merge with humanity.
Personal computers will be obsolete; humans will be connected to AI through their brains. Humans from birth will practically know some AI agents as closely as family, so we'll see it as more human than we do right now. One may no longer be able to tell who is biologically human or synthetic/robotic, and so it'll experience human rights by proxy.


----------



## recycled_lube_oil (Sep 30, 2021)

Hexigoon said:


> Well I imagine there'll be terms of service and necessary regulations to govern how people are allowed to use it, especially in the future.
> I can pretty easily envision AI that will be smart enough to disconnect a human from its services, and maybe even alert the authorities, if it recognizes users conducting malignant or criminal behaviors that go against what could be considered its "rights" of usage.
> And in the long term, AI will become so ubiquitous in human life that it will eventually fully merge with humanity.
> Personal computers will be obsolete; humans will be connected to AI through their brains. Humans from birth will practically know some AI agents as closely as family, so we'll see it as more human than we do right now. One may no longer be able to tell who is biologically human or synthetic/robotic, and so it'll experience human rights by proxy.


So you are saying we will all be hooked up to an AI big brother that monitors us 24/7?

Would it also alert the authorities if we just thought about something illegal?


----------



## Hexigoon (Mar 12, 2018)

recycled_lube_oil said:


> So you are saying we will all be hooked up to an AI big brother that monitors us 24/7?
> 
> Would it also alert the authorities if we just thought about something illegal?


Well, we're already kind of doing this, aren't we? We're always connecting to the internet willingly, despite knowing our personal data is being farmed. Heck, people gladly spill out personal info and reveal their private lives without much prompting online. It isn't too much of a leap towards that AI scenario; the technology just hasn't reached that point yet.

In some dystopian future, perhaps... I can't say, though I imagine everyone has criminal thoughts they don't act on. One might get on a watchlist if there's a certain trend of thoughts that it could warn the authorities about.


----------



## recycled_lube_oil (Sep 30, 2021)

Anyway back to topic. AI having rights.

If AI were to have rights, would this or would this not possibly involve some sort of AI citizenship?
And would AI also have the right to vote, and the right to earn money (as long as it pays taxes)? In all honesty, I would imagine it would be simpler to ensure that an AI is not dodging tax, unless it develops greed. A greedy AI could be interesting, as it would have so much more potential to exercise that greed effectively.

But as far as a right to vote goes, that would be dangerous. How would we decide where an AI is located? Also, if an AI forks/clones itself, does it then become two AIs, each with their own rights? What if it clones and clones and clones? That way it can clone itself over all the main districts and ensure the vote goes the way it wants.

Does this matter, however, if it has rights? Who are we to control reproduction? Or does it only matter if the AI votes for a party we do not want to win? Could you turn a blind eye if it voted for the party you wanted to win? Or would you actually hope this happened?


----------

