# Dilemma: Should we build machines more intelligent than human beings if we could?



## Speakpigeon (Jul 31, 2019)

This is a poll. Please vote before posting comments.

It seems plausible, if not rather likely, that one day humans will be able to build machines more intelligent than themselves. This would likely have all sorts of consequences, some good, some bad, for humanity as a whole or for some, possibly many, individuals. However, assuming we could do it, either we would do it or we wouldn't. Further, once someone discovers how to do it, it becomes very difficult not to do it. Governments will want to do it, the military will want to do it, business will want to do it, and many individuals will be minded to do it, making it almost inevitable that we will build machines more intelligent than human beings.

So, the question is, would you be in favour of building such machines or not?

And what would be your argument for or against, if you have one?
EB


----------



## 74893H (Dec 27, 2017)

Oh hey Pigeon, good to see you again. I don't think we can know for sure whether or not it's a good idea until after the fact. It is going to happen, it's inevitable. If laws get passed to ban it it'll just happen illegally. The concept of it is just too alluring to the majority of people. Like you said. 

My mindset when it happens would be to just let whatever happens happen and hope for the best; I wouldn't know what to expect. It all depends on how they program its AI, I suppose. If it somehow has humanlike compassion (I don't believe that's impossible to program), it probably won't try to end us. But if it isn't programmed that way, and bases everything on pure calculation without any care for consequences, then we're probably doomed. I'd be totally okay with living alongside mechanised people, as long as they aren't overproduced, because then we really would become obsolete.

So I guess I'm on the fence. I don't see any reason to waste energy being against it when it is going to happen no matter what, and I'm also just as curious and excited as everyone else to see that kind of AI, but at the same time I'm nervous as to what might happen because of it. Neutral/I don't know.

We're getting pretty damn close, though. I think we'll see it happen in our lifetimes.



----------



## Speakpigeon (Jul 31, 2019)

Pizzafari said:


> Oh hey Pigeon, good to see you again. I don't think we can know for sure whether or not it's a good idea until after the fact. It is going to happen, it's inevitable. If laws get passed to ban it it'll just happen illegally. The concept of it is just too alluring to the majority of people. Like you said.


I'm not sure most people are really happy with the idea of machines more intelligent than they are. On the contrary, I would expect most people to be really worried at the prospect. There is obviously a rather small group of people who may genuinely believe it would be a good idea, but mostly it is probably people who have some vested interest in saying so, or even in the development of such machines.

You will be aware that there is a very active industry of influencing public opinion for the benefit of economic or political interests, and work on AIs is already an economic sector in itself.

I don't buy that it is inevitable. There are plenty of people advocating against it and it is at least conceivable that they could convince all world leaders that it would be a bad idea. Developing AIs more intelligent than humans probably can't be done by lone individuals or even small organisations. So, if the big powers decided to stop it, they would probably succeed.



Pizzafari said:


> My mindset when it happens would be to just let whatever happens happen and hope for the best; I wouldn't know what to expect. It all depends on how they program its AI, I suppose. If it somehow has humanlike compassion (I don't believe that's impossible to program), it probably won't try to end us. But if it isn't programmed that way, and bases everything on pure calculation without any care for consequences, then we're probably doomed. I'd be totally okay with living alongside mechanised people, as long as they aren't overproduced, because then we really would become obsolete.
> 
> So I guess I'm on the fence. I don't see any reason to waste energy being against it when it is going to happen no matter what, and I'm also just as curious and excited as everyone else to see that kind of AI, but at the same time I'm nervous as to what might happen because of it. Neutral/I don't know.
> 
> We're getting pretty damn close, though. I think we'll see it happen in our lifetimes.


Once AIs more intelligent than us exist, I don't see how it wouldn't make most people at least potentially obsolete.

I'm not worried that AIs would try to end us as you suggest. AIs can be at the same time more intelligent than any human being and still remain machines that we use. Intelligence in itself doesn't have or give any purpose. The machines will just give their answers to our questions.
EB


----------



## Hexigoon (Mar 12, 2018)

Don't know what to tell you. I agree with what Pizzafari said, it is inevitable. The only way this won't happen is if we all go extinct before it occurs. Doesn't matter if people don't like the idea of it, can't stop the march of scientific and technological progress.

All we can really do right now is hope for the best. I'm not a luddite, technology has largely been a benefit, and if there was a machine truly smarter than any human then that would be very good to have. This planet could do with some actual intelligent leadership for once.


----------



## Handsome Dyke (Oct 4, 2012)

There are things humans can do that are considered intelligent, things machines cannot do, and the human mind does not work like a machine works, so the question is rather ambiguous until you explain what you mean by "intelligence." How do you even compare things that operate so differently? 

Furthermore, unless you think people can create machines that are smarter than themselves, these machines will be at most as intelligent as the people who create them, so they'll be smarter than only a fraction of humans (although it might be a large portion).


----------



## Blazkovitz (Mar 16, 2014)

It's better to make humans more intelligent.

"Computers make excellent and efficient servants, but I have no wish to serve under them." - Mr. Spock


----------



## Dustanddawnzone (Jul 13, 2014)

> I don't buy that it is inevitable. There are plenty of people advocating against it and it is at least conceivable that they could convince all world leaders that it would be a bad idea. Developing AIs more intelligent than humans probably can't be done by lone individuals or even small organisations. So, if the big powers decided to stop it, they would probably succeed.


Maybe, but then again, how far into the process are we talking? "Intelligence" will likely depend largely on software, where a large part of the progress will be made through algorithms (and possibly the information given to the algorithm, if it's some advanced set of learning algorithms). More and more software implementing advanced algorithms is progressively released as open source, and more advanced hardware is always in the works to be developed and sold on the market. If it is only controlled once we're too close to the point where it becomes possible, it might not matter.


----------



## Speakpigeon (Jul 31, 2019)

Saiyed En Sabah Nur said:


> There are things humans can do that are considered intelligent, things machines cannot do, and the human mind does not work like a machine works, so the question is rather ambiguous until you explain what you mean by "intelligence." How do you even compare things that operate so differently?


From a dictionary...


> Intelligence
> 1. The ability to acquire, understand, and use knowledge.





Saiyed En Sabah Nur said:


> Furthermore, unless you think people can create machines that are smarter than themselves, these machines will be at most as intelligent as the people who create them, so they'll be smarter than only a fraction of humans (although it might be a large portion).


From my first post....


> It seems plausible, if not rather likely, that one day humans will be able to build machines _more intelligent than themselves_.


EB


----------



## Speakpigeon (Jul 31, 2019)

Dustanddawnzone said:


> Maybe, but then again, how far into the process are we talking? "Intelligence" will likely depend largely on software, where a large part of the progress will be made through algorithms (and possibly the information given to the algorithm, if it's some advanced set of learning algorithms). More and more software implementing advanced algorithms is progressively released as open source, and more advanced hardware is always in the works to be developed and sold on the market. If it is only controlled once we're too close to the point where it becomes possible, it might not matter.


Yes, but I don't believe that any learning can in itself give you intelligence. Information, yes, intelligence, no.

Learning information certainly doesn't in itself make you more intelligent. 

Learning more intelligent methods could, but this would require somebody, or even something, to teach you more intelligent methods, and all any human can do is teach human-level methods; no one for now knows what more intelligent methods would consist of.

And I also don't believe a machine could learn to become more intelligent than us even in a few millennia. What machines can learn is really low-level stuff that may be useful to us but nothing that will make the machine intelligent, at least not in any reasonable time.

So, I think this would require us to understand our own intelligence and to be able to write algorithms doing the same thing. I am not even sure it is possible in principle. But, one day, why not.
EB


----------



## Speakpigeon (Jul 31, 2019)

Hexigoon said:


> Don't know what to tell you. I agree with what Pizzafari said, it is inevitable. The only way this won't happen is if we all go extinct before it occurs. Doesn't matter if people don't like the idea of it, can't stop the march of scientific and technological progress.
> 
> All we can really do right now is hope for the best. I'm not a luddite, technology has largely been a benefit, and if there was a machine truly smarter than any human then that would be very good to have. This planet could do with some actual intelligent leadership for once.


You don't think the risk would be sufficient to make this a bad idea?
EB


----------



## Handsome Dyke (Oct 4, 2012)

Speakpigeon said:


> From a dictionary...


 The definition does not answer the question about how to compare the intelligence of the two because machines do not acquire, understand, or use information the same way humans do. Humans acquire information through a nervous system; machines acquire it by being fed data through ports, electronic sensors, etc. 

How would you determine whether the entity that takes data through ports has greater or better ability to acquire information than the entity that sees, hears, and smells?


----------



## Dustanddawnzone (Jul 13, 2014)

> Learning information certainly doesn't in itself make you more intelligent.
> 
> Learning more intelligent methods could, but this would require somebody, or even something, to teach you more intelligent methods, and all any human can do is teach human-level methods; no one for now knows what more intelligent methods would consist of.
> 
> And I also don't believe a machine could learn to become more intelligent than us even in a few millennia. What machines can learn is really low-level stuff that may be useful to us but nothing that will make the machine intelligent, at least not in any reasonable time.


But at its base level, human intelligence is formed of neuronal information input and "computed" through a bunch of fuzzy relationships between connections. With greater knowledge of how this "computation" works, it makes sense that people would become better not just at simulating the processes that lead to intelligence but also at creating models from which more accurate assumptions can further be made, especially if some sort of automata-like theory could be developed around it. At some point, we could make a machine which would be able to do the same things that a brain can.
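To make the "fuzzy relationships between connections" idea concrete, here is a toy artificial neuron: a weighted sum of inputs squashed through a nonlinearity. This is a hypothetical minimal sketch in Python (the inputs and weights are made up for illustration), not a claim about how biological neurons actually compute:

```python
import math

def neuron(inputs, weights, bias):
    """A toy artificial neuron: weighted sum of inputs passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashes the sum into (0, 1)

# Made-up example: two inputs with hand-picked weights.
activation = neuron([1.0, 0.0], [2.0, -1.0], -0.5)
print(activation)  # roughly 0.82
```

Real neurons are of course vastly messier than this; the point is only that the basic unit of "computation" is simple enough that simulating large numbers of them looks like an engineering problem rather than a conceptual mystery.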


----------



## Speakpigeon (Jul 31, 2019)

Saiyed En Sabah Nur said:


> The definition does not answer the question about how to compare the intelligence of the two because machines do not acquire, understand, or use information the same way humans do. Humans acquire information through a nervous system; machines acquire it by being fed data through ports, electronic sensors, etc.
> 
> How would you determine whether the entity that takes data through ports has greater or better ability to acquire information than the entity that sees, hears, and smells?


I _assumed_ that machines more intelligent than us could be built. 

And the question of how we could ascertain that they are more intelligent is really a different question. 

Nothing is ever exactly the same in real life and yet we assume things are. A cat is a cat is a cat. Well, not in reality; yet we manage. Being intelligent is largely a question of being able to assume that two different things are somehow identical. For example, we are able to see two different things as being two cats, i.e. as sharing the same property of being a cat, and this is essential because we then behave towards these things according to our assumption that they are cats, and therefore have certain common properties, such as typical behaviour.

We may come one day to believe that a machine is more intelligent than us whenever we meet one, just as we may come to believe that some other human being is more intelligent than we are, just by observing this person. Whether we are correct is another matter. There is no absolute method for verifying that our judgements are correct.
EB


----------



## Speakpigeon (Jul 31, 2019)

Dustanddawnzone said:


> But at its base level, human intelligence is formed of neuronal information input and "computed" through a bunch of fuzzy relationships between connections. With greater knowledge of how this "computation" works, it makes sense that people would become better not just at simulating the processes that lead to intelligence but also at creating models from which more accurate assumptions can further be made, especially if some sort of automata-like theory could be developed around it. At some point, we could make a machine which would be able to do the same things that a brain can.


Yes, I agree that this seems a real possibility. However, machines won't discover by themselves how to do it. We would have to figure out by ourselves how human intelligence works and then create machines doing the same thing. 

This isn't going to be something you can do without the big money and outside government scrutiny. If governments of all the major powers become convinced it would be a bad idea to do it, they will very likely be capable of stopping it.

So, now, the question is, what are the good reasons for doing it or not doing it. Here is an opportunity to convince the world's political leaders...
EB


----------



## IDontThinkSo (Aug 24, 2011)

I could build a machine that is more intelligent than you, but that's just because I'm more intelligent than you.


----------



## IDontThinkSo (Aug 24, 2011)

Now you're in the mood for a question that makes more sense.

Should we let smarter people than us build machines as smart as them?

Should we?

Should you?

Could you stop it if you wanted to?


----------



## Charus (May 31, 2017)

How is it possible for a human to build a machine that is much more intelligent than its creator? Unless we're talking about machines being able to calculate stuff much more easily/faster, but I'm sure that's not what the thread is talking about.

That's like saying that God's creation can become more powerful than God (the creator) himself, which is most likely impossible.

Anyway, I think such a concept is impossible, and even if it weren't, I'm sure humans are not crazy enough to do that.


----------



## Strelnikov (Jan 19, 2018)

I say yes... what could possibly go wrong?


----------



## Speakpigeon (Jul 31, 2019)

Gothtron Void said:


> How is it possible for a human to build a machine that is much more intelligent than its creator? Unless we're talking about machines being able to calculate stuff much more easily/faster, but I'm sure that's not what the thread is talking about.
> 
> That's like saying that God's creation can become more powerful than God (the creator) himself, which is most likely impossible.
> 
> Anyway, I think such a concept is impossible, and even if it weren't, I'm sure humans are not crazy enough to do that.


The idea is that it is at least conceivable that some scientific research would establish what the human brain does. We don't need to establish everything that the brain does. We only need to discover the principle of the processing that the brain does.

Assuming this, then it is also plausible that computer scientists will be able to implement this principle on computers.

If we also assume this, then we have a computer working in principle like the brain. However, the human brain is physically constrained, by its size, its biology and its structure and organisation. Computers, on the other hand, could be made bigger and faster using new technology, like for example quantum computing. 

Computers can also easily exchange their data. Human brains cannot. We can communicate, essentially using language, but it is slow and this isn't the same thing as just transferring the data in our mind to the mind of somebody else. But computers can do it. There are also other differences.

Overall, assuming machines used the same processing principles as the human brain, it should be relatively easy to make them much more intelligent than any human, simply because they would have a "brain" bigger, faster and better connected than the human brain will ever be.

I don't think we could disprove this as a possibility.
EB


----------



## 30812 (Dec 22, 2011)

Voted yes. If we fail to find a way to cope with our creations and become obsolete then we should probably go away.

Instead my concern is the definition of "intelligence" of the AI. Our own intelligence is pretty much a product of our physical body and our senses, and these are the natural rules behind our moral codes, our way of thinking, our perception of this world etc. On the other hand, the AI has nothing similar to us even on the most fundamental level. It feels no pain, it does not feel tired, it does not fear death and it certainly does not see the world the same way we do. What is "intelligent" by our definition is likely to be very different from the point of view of a machine which is beyond our comprehension. The "intelligent machine" may end up being something very different from the one envisioned by us.

I had a similar feeling when I read about the human reactions to AlphaGo's "1 in 10,000" Move 37. I couldn't help but wonder whether we can ever truly understand what we have created and why AlphaGo made its move, and if one day we were told by an AI how we should do things, how could we judge whether it is reasonable and to our benefit? If it is so smart, one day it could literally tell you to stick some pointy object against your throat with your right hand because it is in your benefit for some incomprehensible reason not even our professional doctors understand. The ultimate question becomes whether we are capable of trusting an alien.


----------



## Speakpigeon (Jul 31, 2019)

t4u6 said:


> Voted yes. If we fail to find a way to cope with our creations and become obsolete then we should probably go away.
> 
> Instead my concern is the definition of "intelligence" of the AI. Our own intelligence is pretty much a product of our physical body and our senses, and these are the natural rules behind our moral codes, our way of thinking, our perception of this world etc. On the other hand, the AI has nothing similar to us even on the most fundamental level. It feels no pain, it does not feel tired, it does not fear death and it certainly does not see the world the same way we do. What is "intelligent" by our definition is likely to be very different from the point of view of a machine which is beyond our comprehension. The "intelligent machine" may end up being something very different from the one envisioned by us.


Yes but there are two very different (conceivable) possibilities.

First, we build a machine and we understand how it works. Essentially, it would be a machine based on an algorithm we would have written.

Second, we build a machine and we don't completely understand how it works. For example, perhaps machines built using something like neural networks.

In the first case, the machine could be designed to be intelligent but without intentionality. Basically, it answers our questions, suggests intelligent solutions, and does whatever we ask it to do, which may include searching for and killing our enemies.

In the second case, we can't expect the machine to do our bidding. It might, but maybe not. However, even such a machine could be confined to providing answers to questions and producing analyses, scientific theories, etc.

The second possibility seems more obviously hazardous. It is for example at least conceivable that such a machine could conceive of a plan to get rid of us, for example by convincing us to do things which ultimately would result in our disappearance one way or the other. So, maybe we won't want to build such a machine after all.

However, I'm more interested in the first scenario. We understand what we do and so we do it. So, we would have machines vastly more intelligent than us, but machines that we will still be able to use without fear of being used by them. Each would be just a tool, like a desktop computer, only vastly more intelligent than us. The question is, would that be a good idea? Why or why not? 



t4u6 said:


> I had a similar feeling when I read about the human reactions to AlphaGo's "1 in 10,000" Move 37. I couldn't help but wonder whether we can ever truly understand what we have created and why AlphaGo made its move, and if one day we were told by an AI how we should do things, how could we judge whether it is reasonable and to our benefit? If it is so smart, one day it could literally tell you to stick some pointy object against your throat with your right hand because it is in your benefit for some incomprehensible reason not even our professional doctors understand. The ultimate question becomes whether we are capable of trusting an alien.


There is a neat distinction between intelligence and intentionality. Our own intelligence is affected by our emotions, but this is because we have emotions to begin with and we can't get rid of them. A machine, however, could be intelligent without any emotion and without any intention. It would only do our bidding.

Good or bad?
EB


----------



## 30812 (Dec 22, 2011)

Yes, if we are able to achieve control, that would be great. There is however a theory I read somewhere which basically says a true Turing machine (or substitute "intelligent machine") will not let you know that it is capable of passing the Turing test (or whatever tests are set out to determine whether it's intelligent or not). It's a bit dark, but still a possibility. Hopefully the geniuses out there can sort it out.


----------

