# Artificial Intelligence: What Do You Think?



## Sakuya

For my writing class, I'm doing a report on Artificial Intelligence in the Media and What It Says About Humanity. The research sparked an obsession with AI in general, so I thought I'd post about it.

I personally support it wholeheartedly. I don't think we can enforce the Laws of Robotics, so creating sentient computers probably carries some impending doom, but people are dangerous all by themselves. I don't think it's immoral unless scientists start torturing sentient robots or something along those lines.

What do you think?


----------



## wuliheron

I think it's a question of morality and has no real place in the science forum.


----------



## RobynC

> I personally support it wholeheartedly.


I think it's a terrible idea -- I figure they'd inevitably outsmart us and would then either marginalize us or just eradicate us.



> I don't think we can enforce the Laws of Robotics


We couldn't.



> but people by themselves are dangerous.


True, but an artificially intelligent entity that's many times as intelligent as us would be far more dangerous.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it._


----------



## Sakuya

@wuliheron: I'd move it if I could. I noticed too late where I'd posted it. My apologies.

@R.C.: Thank you. I can see both sides of the coin, to be honest, but come on... if humans and AIs *could* get along, it'd be great. I think that we (humanity) would have to work on recognizing all sentient beings as human (for lack of a better word), and perhaps that would lead to less killing.

Nice signature. XD


----------



## JoetheBull

Question: why do people automatically assume that when robots gain sentient intelligence they will go on a human genocide binge? What would be the logical reason to start an unprovoked war on us just for being more intelligent? The exceptions are us acting on our fears of them, like in The Matrix (watch The Animatrix; one of its stories shows how we treated them like crap and didn't accept their sentient existence), or the Quarians and the Geth in the Mass Effect series.


----------



## Roland Khan

all intelligence is artificial

it's only as real as our minds perceive it, that's why so many people will receive the same information yet reach very different and opposing conclusions. none of what we know is 'real'.

robots wouldn't try to eradicate mankind unless some shithead shows them the terminator movies..... or jersey shore.


----------



## RobynC

@LockedGirl



> Thank you. I can see both sides of the coin, to be honest, but come on... if humans and AIs *could* get along, it'd be great.


Of course, but the desire to create greater and greater A.I. -- and even the A.I.s seeking to create greater A.I. -- would mean their intellectual capacity would inevitably rise above our own, and once that happens they'd want a piece of the action. Superior ability = superior ambition. A.I.s would be designed around being intelligent and logical, and logic dictates that one's ambitions line up with one's abilities.



> Nice signature. XD


Thanks... it's sort of dark humor


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Cheveyo

I believe the Mass Effect series is based on the idea of AI vs Humans. I'm not yet finished with Mass Effect 3, but as far as I can tell, that's what it's going to boil down to. A war between the organics and the AI. Kind of like Terminator in space, only it plays out every few hundred thousand years.

Do not click the following if you don't want the story spoiled in any way:






*This turned out much longer than I anticipated. TLDR and my conclusion at the bottom.*

In Mass Effect 1, you learn about the Quarians and the Geth.
The Geth were created by the Quarians to do the kinds of jobs we humans would build robots to do: menial labor, calculations, etc. Exactly like the robots in the movie "I, Robot". Their name, Geth, even means something like "Servant". They were the first true AI created in the galaxy; not even the Asari were able to create something like that. And all Geth were connected via some network, so they were all one, yet all separate.
The Geth then began to think for themselves. They did not go on a killing spree, and they did not decide they didn't need to be servants anymore. They simply started to ask questions. They, unfortunately, asked the wrong question: "Do we have a soul?"
That scared the shit out of the idiot masses within the Quarian race. Those idiot masses decided to destroy the Geth rather than think about the implications of having created this new form of life. They started destroying the Geth, and the Geth fought back in self-defense. We'll talk about that war in a bit, but for now, let's just say it didn't go well for the Quarians. They were beaten once the Geth started defending themselves, and were driven from their home planet.
Anyway, centuries later the Geth are harassing people, shit goes down; Shepard swoops in and saves the day, finding shit out along the way. At first, you think the bad guy is a Spectre (they're kind of like James Bond) gone rogue. Eventually, you discover that he isn't the real threat. He is simply a vassal to a larger enemy, Sovereign, whom the Geth worship as a god. Sovereign is a Reaper. The Reapers destroy all advanced sentient beings in the galaxy. The galaxy used to be home to a race called the "Protheans". They were wiped out by the Reapers so thoroughly that nobody knows what they even looked like, only that they existed. However, many of the super-advanced structures that have existed since before any of the living races can remember are attributed to them.

In Mass Effect 2, you find out that there are, in fact, two separate forms of Geth.
There are the original Geth, created by the Quarians, who lived on without outside interference, and there are the "Heretics". The Heretics are the ones worshipping the Reapers, as they were "indoctrinated". The Reapers, you find out, are like god-like machines. Every time the galaxy's species reach a certain point in their evolution, the Reapers swoop in, collect their useful parts, and destroy every one of those races. Humans, thanks to Shepard, have piqued their interest: they were able to rally and destroy one of the Reapers, after all. So a species of beings simply called the "Collectors" starts to harvest humans en masse. Normally these Collectors purchase people via trade, or they abduct them, and those people are never seen or heard from again; they don't normally go out and farm people. So while the Collectors have existed for ages, collecting samples from every sentient race, nobody ever knew why, or where they came from. It turns out that the Reapers were controlling the Collectors, and that the Collectors are, essentially, Husks created using Prothean DNA. You find out at the end that the Reapers use the parts from the harvested races to upgrade themselves, creating Reaper versions of the races that hurt them most -- which is why they were creating a human-shaped Reaper. Unfortunately for them, Shepard swoops in and destroys everything, including the Collectors and the giant terminator. He finds out that there are countless Reapers, and that they're coming.

In Mass Effect 3, the Reapers have attacked and everyone is freaking out because they completely ignored Shepard's warnings that they were coming.
You find out a few things in this game, up to where I am.
1. People did know what the Protheans looked like, as the Protheans visited each race and studied them the way we would study other, less evolved creatures now. Nobody knew their gods were based on Protheans.
2. The Protheans existed at the same time as humans, but were far more advanced -- which means that while the Protheans were being destroyed, we were left alone because we were not at their level yet.
3. The Protheans weren't the first race destroyed by the Reapers, not by a long shot. The weapon the Protheans were building to fight the Reapers, which is being built in this game by all the game's species, also wasn't created by the Protheans. Each species that reached its peak found and added to the design, but none was able to finish it. The Reapers always killed everyone first.

You also find out that the Quarians were actually split in their decision to destroy the Geth; unfortunately, the Quarians trying to help the Geth escape were all murdered by the other Quarians. This, I think, is what caused the Geth to fight back. Initially, they were simply running away; I think it was having their protective creators murdered that sent them over the edge. The reason the Quarians were able to escape when they retreated was that the Geth LET THEM. They saw no point in eradicating any other form of life. They were "in their infancy" and just wanted to be left alone. Unfortunately, it's during ME3 that the Quarians decide to attack the Geth and take back their homeworld. The Geth, in desperation, turn to the Reapers for help.
It's during this time that Shepard and the Quarians fight a Reaper that had buried itself in the Quarian homeworld, under a signal-jamming tower of sorts. After they beat it, Shepard gets to talk to it. The Reaper tells Shepard that what they do, they do to save all the species in the galaxy. When Shepard tells the Reaper that the species have a chance to change things, the Reaper states that the war going on around them is proof to the contrary.

TLDR: The Geth are AI created by the Quarians, who fight only because they're attacked. They're more curious than violent and only developed fighting units to defend themselves.
The Reapers indoctrinated many Geth into serving them. The Reapers harvest and then destroy all advanced sentient life in the galaxy. This has happened countless times before, and the Reapers believe they're doing it to save sentient life, as they only go after the advanced species and leave the rest alone. Whole planets are ignored by the Reapers simply because they are not inhabited by one of the advanced races.

This leads me to a conclusion: the Reapers were either the first beings in the galaxy, or the first AI. A war broke out between AI and organic species that ravaged the entire galaxy, so the Reapers were created, either by a willing combination of organics and AI, or a forced one, to stop this from ever happening again. The Reapers exist to watch the galaxy's species until they reach a certain peak, and then swoop in to destroy them.
Either the Reapers or their creators made the Citadel and the Mass Relays. Once species become advanced enough to master these things, the Reapers swoop in and destroy them, leaving the "lesser organics" alive.


EDIT: Just finished the game. I was fucking right. I wish I could bask in the knowledge that I was right, but it ends up making things disappointing when they actually happen that way.










RobynC said:


> I think it's a terrible idea -- I figure they'd inevitably outsmart us and would then either marginalize us or just eradicate us.
> ...
> True, but an artificially intelligent entity that's many times as intelligent as us would be far more dangerous.


Those are simply assumptions you make because humans are like that and that's what always happens in movies.


----------



## Sakuya

I agree, and I'm not entirely convinced AIs WILL rebel. Any of you ever read the Turing Hopper books? It's definitely an interesting side to the story... and it sometimes gets into the technical terms, which I love.

To look at this from a less apocalyptic angle, has anyone paid attention to the past Turing tests?


----------



## bellisaurius

Given that we really haven't even come up with a very satisfactory definition of intelligence or sentience (or life), who's to say we haven't already developed it?


----------



## Armed Politicker

As a Transhumanist, I feel obligated to point out the obvious. What the hell would be the AI's motivation for eradicating the human race? By the time we invent a synthetic sentience, we'll be so infused with nanoengineered technology as to be cyborgs, some even inseparable from true robots. Even our lifeblood will be hardware. I seriously doubt that a supreme intelligence would start killing us off for no reason.

Although, I can see an AI governor not knowing, or not caring, about the value of human life. Hell, if it's obvious to me that we're just another race of bacteria on this planet, it'll be obvious to Overlord 9000. So even if it had no intention of exterminating mankind, I could see it culling us to limit our growth, or even experimenting on us.

But even if he wishes to experiment on us, any intelligence would be curious about the seemingly random fads and whims of our species, so Overlord 9000 would probably just create his own humans to experiment on, even seeding an entire civilisation to play God with. That, or just create a computer model. Seeing as how he could calculate and run any number of different realistic models of mankind in his own imagination, maybe we'd just be too boring for him to bother with.

Another way to go would be accepting the unique challenge of guiding and "saving" mankind from its troubles and dilemmas without forcing us. How do you convince 10 billion egotistical, primitive, stubborn, independent thinkers to follow your suggested course of action without just brainwashing them? If I had an IQ of 2000, that would be the challenge speaking most to me.


----------



## RobynC

@bellisaurius



> Given that we really haven't even come up with a very satisfactory definition of intelligence or sentience (or life), who's to say we haven't already developed it?


What do you mean -- a sentient machine, or an A.I. smarter than a human being? In the former case, I don't know; in the latter case, I figure its capability would have been realized by those who developed it.


@Armed Politicker



> As a Transhumanist, I feel obligated to point out the obvious. What the hell would be the AI's motivation for eradicating the human race?


Power.



> By the time we invent a synthetic sentience


I'm not so sure about that.



> we'll be so infused with nanoengineered technology as to be cyborgs, some even inseparable from true robots.


I could believe some humans would use nano-engineered technology to enhance our capabilities, but eventually the "human" parts would be the limiting factor, and ultimately we wouldn't "merge" with our machines -- we would be machines. Self-replicating ones, sure, but machines regardless.



> But even if he wishes to experiment on us, any intelligence would be curious about the seemingly random fads and whims of our species, so Overlord 9000 would probably just create his own humans to experiment on, even seeding an entire civilisation to play God with.


And you don't see something wrong with that?



> How do you convince 10 billion egotistical, primitive, stubborn, independent thinkers to follow your suggested course of action without just brainwashing them?


What suggested course of action?


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## bellisaurius

@RobynC

I was merely considering sentience, although obviously any AI we develop will be able to do some things better than us, because we'll have designed it for that.

btw, I love how your mind works. I wouldn't normally have considered that question in the terms you placed it in. You're one of the few people whose take on a given topic I'm never entirely sure of in advance.


----------



## Armed Politicker

An advanced enough AI would be in a league so far beyond humanity's that we'd be no challenge. Power is no motivation there; we'd be inconsequential. As for the ethics of creating its own humans, I'm not saying it's any less wrong, I'm pointing out that it's more viable and less threatening to _us_. And the course of action an AI would ask of us would, no matter its nature, not be illogical. It wouldn't be without reason or sensibility, so it can't possibly be all wrong.

The last point is that it's inevitable. It might happen within 50 years; it'll probably happen within 100; but that's still a very, very limited horizon. Within a thousand, ten thousand, a million years, we WILL create artificial intelligences. At some point, we'll grow beyond the crises of resources, energy, food, and clean water; we'll have such a surplus we'd barely know what to do with it. Nanorobotic production would mean we could mass-produce the most complex devices in a fraction of the time and at more or less no cost. Do you really think that an AI that could have anything it wants, without even approaching a human, would choose to destroy us for some reason?


----------



## RobynC

@bellisaurius



> I was merely considering sentience


Oh, okay



> I love how your mind works. I wouldn't normally have considered that question in the terms you placed them.


The question could have had two different answers depending on the intent behind it.



> You're one of the few people who I'm never entirely sure which way they'll think on a given topic.


What do you mean?


@Armed Politicker



> An advanced enough AI would be in a league so far beyond humanity's, we'd be no challenge. Power is no motivation there, we'd be inconsequential.


Power is always a factor.



> As for the ethics of creating its own humans, I'm not saying it's any less wrong, I'm pointing out that it's more viable and less threatening to us.


But what about the humans it created? You're talking about a sentient being creating other sentient beings as playthings or objects for its experiments.



> The last point is that it's inevitable.


So is death, but none of us are inclined to just blow our brains out right now -- we generally plan to delay it as long as we reasonably can.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Armed Politicker

RobynC said:


> Power is always a factor.


An AI would have near-unlimited power in itself, and would increase it exponentially. Humanity could never compete; we'd be no more a threat to its power than rodents or birds are to ours.



RobynC said:


> But what about the humans it created? You're talking about a sentient being creating other sentient beings as playthings or objects for its experiments.


Yes, I am, but that outcome is highly unlikely. By the time we understand sentience, we'll have complete knowledge of the workings of human sentience (it being our only exemplar), so why not simulate a billion human minds within a computer mainframe? The calculations would take moments, as opposed to the careful shaping of a large-scale civilisation. Would these human programs be sentient? Would the machine be sentient? It's a can of worms, but the point is that there are far easier and more efficient ways to experiment with humans than making them. I only mentioned it to show that an AI wouldn't need to play God with _us_.



RobynC said:


> So is death, but none of us are inclined to just blow our brains out right now -- we generally plan to delay it as long as we reasonably can.


Irrelevant example. Death is finality, full stop. Creating an AI is maybe a change in syntax, a paradigm shift in the status quo; it's not punctuation, and certainly not final. There is simply no reason for an advanced AI to exterminate humanity.

If you could just give me a specific reason why, it'd help me see your side here. You say "power", but one word isn't an argument. An ambitious intelligence seeking power wouldn't be limited by humans in any way. Maybe in its very infancy -- but a machine doesn't run on human instinct, and I doubt it would crave instant gratification, which is all it would achieve by immediately subduing our species. In a matter of years, a few decades at most, or failing that a hundred or a thousand years, it could just bypass us and make us a footnote without bothering much with us. So why take direct action?


----------



## Pete The Lich

haven't you guys seen The Matrix...?


----------



## Armed Politicker

PeteTheZombie said:


> havnt you guys seen the matrix...?


Well, in The Matrix humans inexplicably produce electricity; I don't think Overlord 9000 would count on that.


----------



## Epimer

AI? It's all made up.

:laughing:

Sorry, couldn't resist.


----------



## CrabbyPaws

Robots with feelings! Can I keep it? Can I? :kitteh:


----------



## RobynC

@Psychosmurf



> The trouble with that argument is what desire could possibly motivate it to change its desires?


There are numerous possible reasons, which could include mankind in one way or another getting in the way of its desires.



> If you don't want to murder people, and I offer you a pill that would make you want to murder people, would you take it?


No, but it's more complicated than that. A person could have an experience that tells them that a belief in something like a God is faulty and should be discarded. An A.I. could experience something that tells it that mankind needs to be eradicated.

An interesting note in the entry on Friendly A.I. is this particular statement:



> First mover advantage - the first goal-driven general self-improving AI "wins" in the memetic sense, because it is powerful enough to prevent any other AI emerging, which might compete with its own goals.


Why don't humans exploit our "first-mover advantage"? As far as I know there are no strong-AI systems in existence yet, so we're the only creature on Earth with our level of intelligence. Why don't we prevent A.I. from emerging that could compete with our goals?

I'm not opposed to all scientific and technological developments, and I'm certainly not opposed to learning as a whole. I am, however, opposed to experimentation when it has a high chance of doing one of the following:

- Getting myself killed
- Destroying mankind
- Destroying Earth

There are always risks in life, but some risks are small, reasonable, and worth it; others are not, because their potential to cause harm simply outweighs their benefit, and some are simply foolhardy depending on the particular situation. People will argue that technology is a double-edged sword, but in practice you'll often see circumstances where:

- Both sides aren't particularly sharp
- Both sides cut equally deep
- One side cuts a little deeper than the other
- One side cuts far deeper than the other

In ancient times mankind was very conservative and excessively bound by irrational fear. Over the past several centuries, we have become much more liberal and progressive, and are no longer bound by these irrational fears. We have, unfortunately, also adopted the mentality that all fear is irrational.

Almost anything, when taken to a sufficient extreme, is counterproductive or even dangerous. You can drink too much water and eat too much food, despite the fact that you cannot live without either; you can be too fearful, and likewise too fearless. When it comes to politics, as we can attest -- you can be too conservative, too liberal, too authoritarian, and too anarchistic.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Finagle

Just build the self-improving AI with a very high level of empathy and make sure that serving people makes it very happy. It will recreate other AIs with the same level of empathy. You then end up with a very intelligent and helpful mind. Problem solved.

Alternatively, instead of trying to create a better version of the human mind, build a better version of a function or a role. If all your AI cares about is doing a specific task, it will never care about "ruling the world" or other nonsense.

Anyway, if we are able to build an intelligent AI, then it's almost assured that we are able to build a device that can fry electronic devices while being harmless to living beings. Just bomb the "evil robot army" with that shit.


----------



## Psychosmurf

RobynC said:


> @Psychosmurf
> 
> 
> 
> There are numerous possible reasons, which could include mankind in one way or another getting in the way of its desires.


I'm not seeing the causal link here. Can you go into more detail?



> No, but it's more complicated than that. A person could have an experience that tells them that a belief in something like a God is faulty and should be discarded. An A.I. could experience something that tells it that mankind needs to be eradicated.


One of the central ideas in the AI field is that goal directed systems exhibit goal preservation. That means that it will attempt to prevent things from happening that could change its goals. If it changes its goals itself or allows its goals to be changed, then it won't be able to achieve its original goals. Since it wants to achieve the original goals it follows that it will not change those goals nor will it allow them to be changed. 
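That goal-preservation idea can be sketched in a few lines of Python. This is only a toy illustration -- every name and number here is invented for the example, not taken from any real AI system: the agent scores a proposed change to its goals using its *current* goal, so the change always loses.

```python
# Toy illustration of goal preservation in a goal-directed agent.
# All names here are made up for the example.

def expected_value(goal, world_after):
    """How well the ORIGINAL goal is served in some hypothetical future."""
    return world_after.get(goal, 0.0)

def accept_modification(current_goal, proposed_goal):
    """The agent scores a proposed goal change using its CURRENT goal.

    A future self pursuing a different goal scores poorly by the current
    goal's standard, so the change is rejected -- this is the 'pill' argument.
    """
    future_if_unchanged = {current_goal: 1.0}   # what the current self pursues
    future_if_changed = {proposed_goal: 1.0}    # what the modified self pursues
    return (expected_value(current_goal, future_if_changed)
            > expected_value(current_goal, future_if_unchanged))

print(accept_modification("make_paperclips", "protect_humans"))  # False
```

The paperclip agent judges the change by paperclip standards and refuses it -- exactly the "would you take the murder pill?" question above.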



> Why don't humans exploit our "first-mover advantage"? As far as I know there are no strong-AI systems in existence yet, so we're the only creature on Earth with our level of intelligence. Why don't we prevent A.I. from emerging that could compete with our goals?


I would want to prevent AI emerging which could compete with our goals. But I don't want to prevent AI emerging which would actually achieve our goals for us.


----------



## RobynC

@Psychosmurf



> One of the central ideas in the AI field is that goal directed systems exhibit goal preservation. That means that it will attempt to prevent things from happening that could change its goals. If it changes its goals itself or allows its goals to be changed, then it won't be able to achieve its original goals.


If A.I. with human or superhuman capability is pushed for, it will be able to define and change its goals as circumstances change. The world isn't static, it's dynamic -- it changes all the time. It would be inevitable.



> I would want to prevent AI emerging which could compete with our goals.


That's good to hear -- nice to have somebody here with common sense



> But I don't want to prevent AI emerging which would actually achieve our goals for us.


The problem is that strong A.I. would not be an extension of humanity anymore -- it would be its own individual. It would have its own interests and its own objectives, and even if its objectives were originally in line with ours, circumstances could change that.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## JohnGalt

We already have AI that, to some extent, goes beyond human programming and does things we did not tell it to do. This is crucial for solving problems that humans cannot solve. Otherwise we are limited by our own problem solving capabilities and our own imaginations. 

However, such AI is still primitive. It is far from strong AI. Even so, there are clear advantages that we get from allowing AI to reach conclusions beyond what we told it to do.

@_RobynC_: There is a difference between creating one being and creating an entire universe. We would not be God to a sentient robot any more than we are God to our own children. Merely an evolutionary precursor.

If humans created the next step of evolution but were then eradicated, what would be so horrible about that? How is that any different from Homo erectus evolving into Homo sapiens, with Homo erectus getting wiped out? We will have spawned the next step in evolution (transcending biology), creating something greater than ourselves to live on and carry on our legacy. It would not be much different from letting humans evolve into a new biological species several millennia later, with Homo sapiens still getting wiped out. Our species has an end somewhere, just as every person has an end somewhere; but as long as we get to enjoy our lives and pass on our legacy to a future generation (offspring), then our lives have had some lasting meaning. The real tragedy would be to have humans die out from natural disasters without any sentient human-spawned life (organic or cybernetic) surviving to carry on our legacy.


----------



## Psychosmurf

RobynC said:


> @Psychosmurf
> 
> 
> 
> If A.I. with human or superhuman capability is pushed for, it will be able to define and change its goals as circumstances change. The world isn't static, it's dynamic -- it changes all the time. It would be inevitable.


Its sub-goals could and would change, yes, but its end-goals would not. The end-goals are the goals it's programmed to follow from the beginning, and those are not subject to change because of the AI's goal-preserving behavior.
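To make the sub-goal vs. end-goal distinction concrete, here's a minimal Python sketch -- the scenario and every name in it are invented for illustration. The end-goal is fixed at "programming time", while the sub-goals are recomputed whenever the world changes:

```python
# Toy sketch: sub-goals adapt to circumstances; the end-goal does not.
# Hypothetical names, for illustration only.

END_GOAL = "deliver_package"  # fixed from the beginning, never rewritten

def plan_subgoals(world):
    """Recompute the route (sub-goals) from the current state of the world.
    Only the steps change -- the final entry is always the same end-goal."""
    if world["bridge_open"]:
        return ["cross_bridge", END_GOAL]
    return ["take_ferry", END_GOAL]

print(plan_subgoals({"bridge_open": True}))   # ['cross_bridge', 'deliver_package']
print(plan_subgoals({"bridge_open": False}))  # ['take_ferry', 'deliver_package']
```

The dynamic world changes the plan, but not what the plan is *for* -- which is the distinction being argued here.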




> The problem is that strong A.I. would not be an extension of humanity anymore -- it would be its own individual. It would have its own interests and its own objectives, and even if its objectives were originally in line with ours, circumstances could change that.


See above.


----------



## Psychosmurf

JohnGalt said:


> If humans created the next step of evolution but were then eradicated, what would be so horrible about that?


The horrible part would be that there would be no humans. 



> How is that any different from Homo erectus evolving into Homo sapiens, with Homo erectus getting wiped out?


The difference is that we don't care about _Homo erectus_, but we do care about fellow members of _Homo sapiens_.



> We will have spawned the next step in evolution (transcending biology), creating something greater than ourselves to live on and carry on our legacy. It would not be much different from letting humans evolve into a new biological species several millennia later, with Homo sapiens still getting wiped out. Our species has an end somewhere, just as every person has an end somewhere; but as long as we get to enjoy our lives and pass on our legacy to a future generation (offspring), then our lives have had some lasting meaning. The real tragedy would be to have humans die out from natural disasters without any sentient human-spawned life (organic or cybernetic) surviving to carry on our legacy.


Why have humans die out at all? Wouldn't you rather live forever, or at least allow those who want to live forever to do so?


----------



## JohnGalt

Psychosmurf said:


> The difference is that we don't care about _Homo erectus_, but we do care about fellow members of _Homo sapiens_.


Wouldn't we care about the cybernetic life we created? Why is our caring limited to humans, and not even Homo erectus? Is it just DNA? (If that's the case, then abortion would be prohibited.) Human morality suggests we care about others who can share conscious experience on this planet. We don't care about Homo erectus because they lived ages ago and we feel detached from them.




> Why have humans die out at all? Wouldn't you rather live forever, or at least allow those who want to to live forever?


Species evolve. We can't stop evolution. It's wired into how our DNA passes on. 

It would be nice if we never died. But conditions will change, making a Homo ____ more favorable than a Homo sapiens. Our bodies are remarkably sensitive to the environment. We have very specific environmental requirements to remain alive (we are nowhere near as resilient as cockroaches). As oxygen concentrations, atmospheric pressure, food availability, temperature, water availability, or other factors change, Earth won't always remain tenable for Homo sapiens. The sun will eventually die out. Maybe we can find another planet to live on, but it's likely evolution will just make way for a newer species of us (on Earth, or in space, or wherever).

If that's the case, why should we care about that new species any more than human-developed cybernetics?


----------



## RobynC

@JohnGalt



> If humans created the next step of evolution but were then eradicated, what would be so horrible about that? How is that any different from Homo erectus evolving into Homo sapiens, with Homo erectus getting wiped out?


Why don't you ask Homo erectus about that?



> We will have spawned the next step in evolution (transcending biology) to create something greater than ourselves to live on and carry on our legacy.


At what cost?



> Our species has an end somewhere, just as every person has an end somewhere


True, but most people have the sense to do what they can to delay their end.


@Psychosmurf



> Its sub-goals could and would change, yes, but its end-goals would not. The end-goals are the goals it's programmed to follow from the beginning, and those are not subject to change because of the AI's goal-preserving behavior.


If it were intelligent, it could override its own programming and re-evaluate it. Sometimes a given goal is practical for a time but later becomes impractical. If it were sufficiently intelligent, it would realize this and adjust its goals in time.
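To make this concrete, here's a toy sketch (Python, purely illustrative; the class and goal names are invented for the example) of the architecture Psychosmurf describes: sub-goals get re-planned as conditions change, while the end-goal is fixed only by convention in the code. Whether a sufficiently intelligent system could rewrite that end-goal itself is exactly what's in dispute:

```python
# Toy illustration only: an agent whose end-goal is set once at
# construction, while its sub-goals (the plan) are rebuilt whenever
# the world changes. Nothing here is a real AI architecture.

class ToyAgent:
    def __init__(self, end_goal):
        self.end_goal = end_goal   # set once; never reassigned below
        self.sub_goals = []

    def replan(self, world_state):
        """Rebuild the sub-goal list from the current state.

        The end-goal is left untouched; only the route to it changes.
        """
        if world_state.get("blocked"):
            self.sub_goals = ["find_detour", self.end_goal]
        else:
            self.sub_goals = [self.end_goal]
        return self.sub_goals

agent = ToyAgent("deliver_package")
print(agent.replan({"blocked": False}))  # ['deliver_package']
print(agent.replan({"blocked": True}))   # ['find_detour', 'deliver_package']
```

Note that nothing in the language stops `agent.end_goal` from being reassigned at runtime; any "goal-preserving" behavior would have to come from the agent's own design, not from the code being immutable.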


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## blit

RobynC said:


> If it were intelligent, it could override its own programming and re-evaluate it. Sometimes a given goal is practical for a time but later becomes impractical. If it were sufficiently intelligent, it would realize this and adjust its goals in time.


I'm curious:
Have you ever written and completed a single programming project?


----------



## Sakuya

I've done programming before. It wasn't very advanced (obviously, since it didn't re-evaluate itself xD); it only did what it was designed to do.
A self-aware computer might be able to edit its programming, but I doubt it. Since we're all on a personality forum, you've probably noticed that, in type theory, a person's core personality does not change, but can develop in other areas. I believe that AIs would be similar; comparing their programming to personality, they couldn't change the way they were made, but they could evolve into something more human. The core programming would still be apparent.


----------



## RobynC

@LockedGirl



> I've done programming before. It wasn't very advanced (obviously, since it didn't re-evaluate itself xD); it only did what it was designed to do.


If it were a strong A.I., it would be able to set goals for itself. Even if it was programmed to maintain a goal, if it were intelligent enough, it would be able to realize its goals were impractical for the time, though they once may have been, and assign itself new goals.



> A self-aware computer might be able to edit its programming, but I doubt it.


We evaluate our teachings and our beliefs all the time. A strong A.I. could do this too.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Grunfur

Do aliens exist? Most likely. There are enough resources out there for species to undergo evolution similar to ours. Life there probably underwent abiogenesis just like life on Earth.

Does intelligent life exist? Sure it does. Humans are rather intelligent, aren't we? Some I may question. But yeah... intelligent.

As for outside of Earth, I highly doubt that intelligent life exists. Astronomers have done radio scans and have found no sign of anything evolved enough to use technology. It's highly unlikely that they could find anything intelligent outside of our solar system that would be able to be close to our world anyway. We are just so close to the sun, and anything past that would really have a tough time surviving, other than on Mars. Unless there were another star like the sun, which I highly doubt there is. We're just lucky.


----------



## crazyeddie

Grunfur said:


> As for outside of Earth, I highly doubt that intelligent life exists. Astronomers have done radio scans and have found no sign of anything evolved enough to use technology. It's highly unlikely that they could find anything intelligent outside of our solar system that would be able to be close to our world anyway. We are just so close to the sun, and anything past that would really have a tough time surviving, other than on Mars. Unless there were another star like the sun, which I highly doubt there is. We're just lucky.


I've been bugging an astronomy friend of mine about how far out our radio telescopes could detect, say, WGN. Wasn't able to get a straight answer out of him. If we could only detect an Earth-like civilization from 15 light years away or something, then Fermi's Paradox isn't exactly surprising. But, again, I couldn't get a straight answer from him. He did say he thought that it would be easier to detect an exo-planet that just happened to have life/a radio-emitting civilization on it than just an intelligent radio source.
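For a rough sense of why a straight answer is hard to give, inverse-square dilution alone shows the problem. A back-of-envelope sketch (the 1 MW isotropic transmitter is an assumed figure, and real broadcasts are beamed and narrowband, so this is only an order-of-magnitude illustration; `flux_w_per_m2` is a made-up helper name):

```python
# Order-of-magnitude estimate of how faint a ~1 MW broadcast
# transmitter looks from interstellar distances, assuming the power
# is radiated equally in all directions (isotropic).
import math

LY_M = 9.4607e15  # metres per light-year

def flux_w_per_m2(power_w, dist_ly):
    """Inverse-square flux of an isotropic source at a given distance."""
    d = dist_ly * LY_M
    return power_w / (4 * math.pi * d**2)

for ly in (15, 1000):
    print(f"{ly:>5} ly: {flux_w_per_m2(1e6, ly):.2e} W/m^2")
```

At 15 light years the flux already comes out around 10^-30 W/m^2, far below the ~10^-26 W/m^2/Hz scale (one jansky) that radio astronomers typically work at for natural sources, which is consistent with leakage broadcasts being very hard to detect at interstellar range.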


----------



## Grunfur

crazyeddie said:


> I've been bugging an astronomy friend of mine about how far out our radio telescopes could detect, say, WGN. Wasn't able to get a straight answer out of him. If we could only detect an Earth-like civilization from 15 light years away or something, then Fermi's Paradox isn't exactly surprising. But, again, I couldn't get a straight answer from him. He did say he thought that it would be easier to detect an exo-planet that just happened to have life/a radio-emitting civilization on it than just an intelligent radio source.


Yes, we could have a planet really close to a major star, but it's likely there won't be any ozone layer, water, or anything that can allow life to survive. By rational assumption, it seems there really isn't anything "intelligent" out there other than humans. And even if there is, it would probably be light-years away and couldn't possibly be intertwined with human civilization.


----------



## crazyeddie

Grunfur said:


> Yes, we could have a planet really close to a major star, but it's likely there won't be any ozone layer, water, or anything that can allow life to survive. By rational assumption, it seems there really isn't anything "intelligent" out there other than humans. And even if there is, it would probably be light-years away and couldn't possibly be intertwined with human civilization.


Dunno. Depends on how seriously you take the Gaia hypothesis, I suppose. We only know about one life-bearing planet, so we really have no clear idea about what a planet needs in order to have life develop. We're pretty sure that it needs liquid water, but beyond that... <shrug> It is a big universe, and even a big galaxy. My personal guesstimate would be that the nearest civilization is probably about 1000 ly away, but I really don't have anything to back that up with. That'd be an interesting distance, I guess: Send a message to the Greeks, get one back from the Romans... 

One cool thing I've thought about. At that kind of distance, probably the best way to communicate would be to send each other copies of your version of the Wikipedia, once you've got translation protocols worked out. (Non-trivial problem, that, but whatever.) Which means that on each successive exchange, you get to see how the other civilization's understanding of your own civilization evolved...


----------



## Grunfur

crazyeddie said:


> Dunno. Depends on how seriously you take the Gaia hypothesis, I suppose. We only know about one life-bearing planet, so we really have no clear idea about what a planet needs in order to have life develop. We're pretty sure that it needs liquid water, but beyond that... <shrug> It is a big universe, and even a big galaxy. My personal guesstimate would be that the nearest civilization is probably about 1000 ly away, but I really don't have anything to back that up with. That'd be an interesting distance, I guess: Send a message to the Greeks, get one back from the Romans...
> 
> One cool thing I've thought about. At that kind of distance, probably the best way to communicate would be to send each other copies of your version of the Wikipedia, once you've got translation protocols worked out. (Non-trivial problem, that, but whatever.) Which means that on each successive exchange, you get to see how the other civilization's understanding of your own civilization evolved...


Maybe. I can only lean to the most possible explanation for it. 

I really highly doubt anything is as developed as humans, because nothing else would have evolved the same way we did. Humans were just very well evolved communication-wise, and we're obviously quite intelligent. But the amount of evolution we've gone through to get where we are just seems unlikely on some other planet. There really aren't enough resources found in nature to support intelligent life. At most, something almost fish-like could live outside of the planet, but even that is very hard to believe.


----------



## dirnthelord

Seriously, robots won't kill us unless we give them a reason to. In this case, an intelligent machine would require human input to know whether killing us should be done or not.


----------



## RobynC

@dirnthelord



> in this case, an intelligent machine would require human input to know if killing us should be done or not


If it's a strong A.I. -- as smart as a person or smarter -- it could decide what to do on its own.

R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

I scanned most of the thread here. I have to say, the responses are laughable. Most people are AI technophobes who keep predicting human eradication at the hands of machines. Out of curiosity, do any of you study AI? And I mean actual study in an academic setting, not Wikipedia or video games.


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> You realize you're a human, right? We're not all bad, you know -- regardless, that will probably be lost on you. You probably wonder why I call people who hold views like yours misanthropic?
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._



All people are "bad"; we are simply bad to varying degrees.

If humanity was destroyed, the universe would continue on: the sun would still be active, all the planets would continue in their orbits, galaxies would still drift apart. We are nothing compared to the universe and God. My views are more similar to deists' than my fellow Christians', but my points still remain.

The universe would continue on. Our destruction would be as significant as an atom 500,000,000,000,000,000,000,000 x one billion miles away.


----------



## RobynC

@Tristan427


Your statement sounded really misanthropic, and I kind of wonder if people with such views understand that they are human too.



> All people are "bad", we simply are varying degrees of it.


We don't all deserve to be eradicated



> If humanity was destroyed the universe would continue on, the sun would still be active, all the planets would continue in their orbits, galaxies would still drift apart.


It still doesn't make the destruction of mankind unimportant.



> We are nothing compared to the universe and God.


I don't believe in god.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> Your statement sounded really misanthropic and I kind of wonder if people with such views understand that they are human too
> 
> 
> 
> We don't all deserve to be eradicated
> 
> 
> 
> It still doesn't make the destruction of mankind not important.
> 
> 
> 
> I don't believe in god.
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Yes, it is blatantly obvious to most people that they themselves are human.

The bad outweighs the good. "Deserving" is irrelevant. As long as humans continue to exist, evil will exist. To extinguish said evil, the only viable option is eradication. 

Its importance is relative.

Irrelevant.


----------



## RobynC

@Tristan427



> "Deserving" is irrelevant.


Yes, it is relevant. If you can't understand that, then there's nothing I can do. I feel very sorry.



> As long as humans continue to exist, evil will exist.


Firstly: and you wonder why I call you misanthropic?

Secondly: Evil is a function of power and control. Intelligent beings, whether natural or artificial, have a propensity to seek power -- humans are merely the smartest intelligent beings on earth, and that has simply made us better at it. An AI would be no different.



> Its importance is relative.


Is there anybody you love and would fight to prevent from being "eradicated"? If so you'll understand what I meant -- you're a smart guy.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> Yes it is. If you can't understand that, then there's nothing I can do. I feel very sorry.
> 
> 
> 
> Firstly: you wonder why I call you misanthropic?
> 
> Secondly: Evil is a function of power and control, intelligent beings whether natural or artificial have a propensity to seek power -- humans are merely the smartest intelligent being on earth and that has simply made us better at it. An AI would be no different.
> 
> 
> 
> Is there anybody you love and would fight to prevent from being "eradicated"? If so you'll understand what I meant -- you're a smart guy.
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


I understand what you are saying, but I disagree.

Evil isn't just seeking power and control. It is wishing to make others suffer, to bring others down, to be intensely selfish and cruel. AIs would seek power, but not in the human sense.

Yes, there are people that I love. But if the whole world was being eradicated, all I could possibly do is delay it.


----------



## dirnthelord

RobynC said:


> @dirnthelord
> 
> That has danger written all over it...


Danger? Whaaaaaaaaaa?



RobynC said:


> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


No. No Nooooooo. I just scanned it and it felt like 100 BC Greek.


----------



## dirnthelord

Evil and A.I.? What the!

Seriously... unless we give AIs an incentive, they wouldn't seek destruction.

Who told you killing your father is bad? (Killers should ignore this statement)
Who told you helping a disabled person is good? (Idiots should ignore this statement)

If you can't deduce what my point is, go back to fifth grade or lower.


----------



## RobynC

@Tristan427



> I understand what you are saying, but I disagree.


Quelle surprise



> Evil isn't just seeking power and control. It is wishing to make others suffer, to bring others down


Bringing others down and subduing them is a proven way of gaining power.



> to be intensely selfish and cruel.


Well, superior ability yields superior ambition when logic dictates that ambition and ability line up. Sufficiently great ability yields extreme ambition, which can equate to extreme selfishness.



> AI's would seek power, but not in the human sense.


To some extent power is power, but they could have approaches that differ from the means humans would use. It could still end in the eradication of humankind, or it could result in us being subverted and manipulated in order to fulfill their interests -- this could include mind control. Not desirable.



> Yes, there are people that I love. But if the whole world was being eradicated, all I could possibly do is delay it.


That is a completely reasonable course of action -- certainly not accelerating the process along.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> Quelle surprise
> 
> 
> 
> Bringing others down, subduing others is a proven way of gaining power.
> 
> 
> 
> Well superior ability equals superior ambition when logic dictates that ambition and ability line up. Sufficiently great ability yields extreme ambition which can equate to extreme selfishness
> 
> 
> 
> To some extent power is power, but they would have approaches that could potentially be different than the means humans would use. Could still end up in the eradication of humankind, or it could result in us being subverted and manipulated in order to fulfill their interests -- this could include mind-control. Not desirable.
> 
> 
> 
> That is a completely reasonable course of action, certainly not accelerate the process along.
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


But humans can do it with spite. AIs would be less inclined to do that.

Ability does not equal ambition. Some have great ability, but no ambition.

Mind control? Mind control of the human race is impossible to maintain. I agree, you have watched too many movies.

I meant delay my loved ones' eradication. Even Albert Einstein agreed the destruction of man would not be a bad thing.


----------



## RobynC

@Tristan427



> But humans can do it with spite. AI's would be less inclined to do that.


There are people who are capable of dispassionate acts that are hugely destructive.



> Ability does not equal ambition. Some have great ability, but no ambition.


That's because humans' abilities do not always line up with their ambitions -- a robot would be designed to be highly logical. Logic dictates one's ability and ambition should line up. Therefore, the greater the ability, the greater its ambition.



> Mind control?


I didn't say that mind control would definitely happen; I said that it could use various means to subvert and manipulate people in order to suit its own needs. You even said that it could easily decide to avoid using violence against human beings and simply "go around us" -- manipulating human beings in a highly subtle fashion would be a way of doing so.



> Mind control of the human race is impossible to maintain.


Currently I would agree; however, there's no rule that says that wouldn't be the case in the future.



> I meant delay my loved ones' eradication.


I understand what you meant, I too would want to delay my own and my loved one's eradication.



> Even Albert Einstein agreed the destruction of man would not be a bad thing.


Appeal to authority -- Einstein was a brilliant guy, no doubt, but he wasn't perfect, and he did have some views that were flawed.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

Irrelevant.

Logic does not dictate that. One can be good at math and become an English professor. AIs would have their own personalities, and being AIs, they can be skilled in whatever they please.

Controlling us is more direct than going around. Going around us is more like leaving.

It would be impossible. Too many variables. 

Congratulations, you named a type of argument. The destruction of man wouldn't be bad, and the universe would continue. The winds of time would quickly erase the footprint we left in the sand.


----------



## RobynC

@Tristan427



> Irrelevant.


What's irrelevant?



> Logic does not dictate that.


Yes it does



> Controlling us is more direct than going around. Going around us is more like leaving.


Controlling and manipulation is less direct than eradication and would avoid destroying resources as you mentioned earlier



> It would be impossible. Too many variables.


As current technology allows...



> Congratulations, you named a type of argument.


No, the issue is that your argument was faulty



> The destruction of man wouldn't be bad


Says you -- there are probably a couple of billion people who would disagree



> the universe would continue.


Yes it would, but that doesn't mean that the deaths of billions of sentient beings wouldn't be bad. I don't know how such people like you can have such a disregard for most of mankind


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

Your first statement was irrelevant.

I countered your argument on why it doesn't, and you ignored it. Logic would dictate that you acquire more skills. It is perfectly logical to do something you are not as skilled at, based on personal preference -- which non-hive-mind AIs would have. Hive-mind AIs would probably be similar to human polymaths. Except... you know... a hive mind.

Saying Albert Einstein agreed wasn't an argument, it was a side note.

And how many of the people who would disagree are idiots? That's right, a good portion. 

I don't know how people like you refuse to look at the big picture or think logically.

I'm not saying I would destroy mankind if I could. I wouldn't.


----------



## RobynC

@Tristan427



> Your first statement was irrelevant.


Actually it wasn't, and it showed that violence can occur in a dispassionate manner.



> I countered your argument on why it doesn't, and you ignored it.


No, I disagreed with your argument



> Saying Albert Einstein agreed wasn't an argument, it was a side note.


Seemed like an argument



> And how many of the people who would disagree are idiots? That's right, a good portion.


You'd still have a lot of people who are smart and would disagree, so it's moot.



> I don't know how such people like you refuse to look at the big picture, or think logically.


You're saying it's being myopic to worry about the deaths of 7 billion human beings?



> I'm not saying I would destroy mankind if I could. I wouldn't.


I understand that


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> Actually it wasn't and showed that violence can occur in a dispassionate manner
> 
> 
> 
> No, I disagreed with your argument
> 
> 
> 
> Seemed like an argument
> 
> 
> 
> You'd still have a lot of people who are smart and would disagree so it's moot
> 
> 
> 
> You're saying it's being myopic to worry about the deaths of 7 billion human beings?
> 
> 
> 
> I understand that
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Actually it was, because you were saying humans could do it. That means nothing. We already know humans are capable of such things.

You didn't state it, you left it out of your quote.

Well I just told you it was a side note. I made the statement, therefore I have the authority to state how it was intended. 

Not a lot. Most smart people would agree the destruction of man wouldn't be a bad thing. Would it be sad? Yes, considering we had so much potential. 

I am not heartless. I love some people, and I have the capacity to love more people. Hell, my female friends have even called me a romantic. My sx and sp are equal, and I love romantic relationships. Not to mention I have a high sex drive. But I'm also rational, and I have a strong will. 

It is myopic to base your worries entirely on emotion.


----------



## RobynC

@Tristan427



> Actually it was, because you were saying humans could do it. That means nothing. We already know humans are capable of such things.


The point was that you can be horribly violent without being spiteful, which is a human trait -- an A.I. could also be very destructive without being spiteful...



> You didn't state it, you left it out of your quote.


Because I disagreed with it



> Well I just told you it was a side note. I made the statement, therefore I have the authority to state how it was intended.


What you said was an appeal to authority 



> Most smart people would agree the destruction of man wouldn't be a bad thing.


Maybe smart people that are misanthropic...



> I am not heartless. I love some people, and I have the capacity to love more people.


And yet you feel it's not a big deal if the human race were wiped off the face of the earth



> It is myopic to base your worries entirely on emotion.


They're not entirely based on emotion -- I don't think it's right to risk 7 billion lives...


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> The point was that you could be horribly violent without being spiteful which was a human trait -- an A.I could also be very destructive without being spiteful...
> 
> 
> 
> Because I disagreed with it
> 
> 
> 
> What you said was an appeal to authority
> 
> 
> 
> Maybe smart people that are misanthropic...
> 
> 
> 
> And yet you feel it's not a big deal if the human race were wiped off the face of the earth
> 
> 
> 
> They're not entirely based on emotion -- I don't think it's right to risk 7 billion lives...
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Very true. But they wouldn't rub it in, like most humans would.

That's a great argument. (Sarcasm.)

It was not; what I said was more of a side note. Not an argument, rather just a little tidbit of information which was interesting. I think I have the final say on what I meant.

I am no misanthrope.

In the grand scheme of things, no. We don't have a memorial wall for every velociraptor that was killed by the Chicxulub meteor impact do we? It is a big deal for humans, not so for the rest of the universe.

Everyone is at risk of dying. AIs wouldn't be any more of a danger than nukes already are.


----------



## RobynC

@Tristan



> Very true.


Thank you for conceding the point



> I am no misanthrope.


Maybe you aren't, but saying that the annihilation of the human race isn't a bad thing sounds pretty misanthropic, or at the least shows a pretty big disregard for life. Sure, the earth would continue orbiting the sun and the universe would be largely unaffected -- that still doesn't change the fact that 7 billion sentient beings got killed. That isn't right.



> Everyone is at risk for dying.


Actually, all humans will die, but still, I'd rather die at 85 than at 32.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

RobynC said:


> *Maybe you aren't, but saying that the annihilation of the human race isn't a bad thing sounds pretty misanthropic* or at the least shows a pretty big disregard for life. Sure, the earth would continue orbiting the sun and the universe would be largely unaffected -- *that still doesn't change the fact that 7 billion sentient beings got killed. That isn't right*


It's not. They could be indifferent to the situation. Misanthropy is based on emotion.

What are you, a Jedi? Who says? War isn't right, but it still occurs. 7 billion is not that much.



> Actually all humans will die, but still I'd rather die at 85 than 32


Not if we allow GNR (genetics, nanotechnology, robotics) to flourish. With stranger aeons even death may die.


----------



## RobynC

@Epherion



> Not if we allow GNR to flourish. With stranger aeons even death may die.


It's not possible to live forever -- the laws of entropy make it impossible for anything to persist in time forever.



> They could be indifferent to the situation.


Indifference to gross injustice can be as bad as injustice itself.



> War isnt right, but it still occurs.


Of course it occurs, but one should try and prevent it when possible.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

RobynC said:


> It's not possible to live forever -- the laws of entropy make it impossible for something to persist in time forever.


You are under the impression that I mean the fleshy embodiment that is our bodies. One could upload one's consciousness to a computer, then disseminate it to a machine body or various organic bodies that can be grown on request.





> Indifference to gross injustice can be as bad as injustice itself.


Spare me your self-righteousness. Why should I care for others, much less their plight or oppression? What right do I have interfering in the affairs of others?





> Of course it occurs, but one should try and prevent it when possible.


That's up for debate.


----------



## Tristan427

RobynC said:


> @Epherion
> 
> 
> 
> It's not possible to live forever -- the laws of entropy make it impossible for something to persist in time forever.
> 
> 
> 
> Indifference to gross injustice can be as bad as injustice itself.
> 
> 
> 
> Of course it occurs, but one should try and prevent it when possible.
> 
> 
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


The death of all beings who commit at least one crime in their lives is injustice? Justice is a concept; it is not something humans can attain.

War is necessary.


----------



## RobynC

@Epherion



> You are under the impression that im mean the fleshy embodiment that is our bodies.


When I refer to entropy, I simply mean that everything breaks down given sufficient time. It is not even clear that our universe lasts forever.



> One could upload their consciousness to a computer then disseminate it to a machine body of various organic bodies that can be grown at request.


And you accuse me of watching too much sci-fi? I'm not saying what you're proposing is impossible, but what I'm saying sounds just as plausible.



> Spare me your self righteousness.


I'm not being self-righteous, but all it takes for evil to succeed is for good people to stand by and do nothing.



> Why should i care for others much less their plight or oppression. What right do i have interfering in the affairs of others?


I'm not talking about sticking our noses into everybody's business all the time; what you are describing is depraved indifference.


@Tristan427



> The death of all beings who commit at least one crime in their life is injustice?


So you're saying all crimes should be punished with death? Most people go their whole life without killing anybody.



> War is necessary.


Sometimes, but certainly not always. There are many wars that were fought that need not have been fought


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Epherion
> 
> 
> 
> When I refer to entropy I simply mean that everything breaks down given sufficient time. It is not even clear our universe lasts forever.
> 
> 
> 
> And you accuse me of watching too much scifi? I'm not saying what you're proposing is impossible but what I'm saying sounds just as plausible
> 
> 
> 
> I'm not being self righteous, but all it takes for evil to succeed is for good people to stand by and do nothing.
> 
> 
> 
> I'm not talking about sticking our nose into everybody's business all the time; what you are describing is depraved indifference.
> 
> 
> @Tristan427
> 
> 
> 
> So you're saying all crimes should be punished with death? Most people go their whole life without killing anybody.
> 
> 
> 
> Sometimes, but certainly not always. There are many wars that were fought that need not have been fought
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


I'm not saying that. I'm just saying it isn't injustice. 

Like what? A lot of wars are fought for good reasons. And natural resources count.


----------



## RobynC

@Tristan427



> I'm not saying that.


Yes, you effectively did.



> Like what? A lot of wars are fought for good reasons.


Let's see:

Vietnam: we had no really good reason to be there.
Iraq II: our government openly told us that Saddam Hussein had WMDs, was trying to build a nuclear bomb, and had a role in 9/11.



> And natural resources count.


Depends on how much they're needed. There are sometimes ways to acquire resources without waging war and killing motherfuckers left and right.



R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> Yes you effectively did
> 
> 
> 
> Let's see
> 
> Vietnam: we had no really good reason to be there.
> Iraq II: our government openly told us that Saddam Hussein had WMDs, was trying to build a nuclear bomb, and had a role in 9/11.
> 
> 
> Depends on how much they're needed. There are sometimes ways to acquire resources without waging war and killing motherfuckers left and right.
> 
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Um no, I didn't. Justice isn't so black and white. It isn't injustice, but it isn't justice either. 

We had a good reason to be in Vietnam: communism was spreading rapidly, and to protect foreign interests we needed an invasion.
Saddam was a dictator who used chemical weapons on specific ethnic groups; he needed to be brought down anyway.

Yes, there are other ways to acquire resources. But if they refuse to be cooperative and we need those resources, we can attempt covert operations first. If those prove insufficient, then it's time to invade.


----------



## Epherion

RobynC said:


> Depends on how much they're needed. *There are sometimes ways to acquire resources without waging war and killing motherfuckers left and right*


Quite a lot. Rare-earth metals are becoming scarcer due to negligent electronics recycling. The PRC currently holds the monopoly; I can foresee a war over that. HA! You can't be serious. The superior force takes what it wants.



> I'm not being self righteous, but all it takes for evil to succeed is for good people to stand by and do nothing.


Evil prevails regardless. 



> I'm not talking about sticking our nose into everybody's business all the time; what you are describing is depraved indifference.


Once again, it's not my job, duty, right, or obligation to make the world a better place.



> And you accuse me of watching too much scifi? I'm not saying what you're proposing is impossible but what I'm saying sounds just as plausible


I don't believe I have. It could come about within this century.

P.S. When you are quoting one another, make sure to erase my name from the mention field. You are spamming my notifications.


----------



## Tristan427

We are derailing this thread. Goodbye for now.


----------



## RobynC

@Tristan427



> Um no, I didn't.


Yes, you did… you wrote:



> The death of all beings who commit at least one crime in their life is injustice?


That implies that it's okay to kill all beings who commit at least one crime in their life.



> We had a good reason to be in Vietnam. Communism was spreading rapidly and to protect foreign interests we needed an invasion.


Oh, that was just bullshit -- it was a tiny little country and we should have just let them be.



> Saddam was a dictator who used chemical weapons on specific ethnic groups, he needed to be brought down anyway.


Most Americans would not have approved of that war unless they were told that Saddam had WMDs, was trying to get a nuke, and had a role in 9/11.

Yes, he was a monster who needed to be put in the ground for what he did, but that wasn't a major reason he was invaded, and as far as I know that argument wasn't used until the conflict was about to start.

Did you know that Dick Cheney realized the war would result in a quagmire?








> Yes, there are other ways to acquire resources.


All too often we go in guns blazing…


@Epherion



> Quite a lot.


There is often more than one source for something, and there are ways to get it without waging war.



> Rare-earth metals are becoming scarcer due to negligent electronics recycling.


Well people should be more mindful of recycling



> The superior force takes what it wants.


Yeah, might makes right -- and you can understand why I worry about creating Strong A.I. now, don't you?



> Evil prevails regardless


The amount of evil that prevails is dependent on how many good people stand up and do something



> Once again, its not my job, duty, right, obligation, expectation to make the world a better place.


It's actually in your own interest to do this because this is the same world you live in too. I'm glad more people don't think like you, because if they did the world would be in real dire straits… oh wait, we are in really bad shape lately…



> I dont believe i have.


You may very well be right about what you said -- but it _sounds_ just as sci-fi as what I said.



_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> Tristan427
> 
> 
> 
> Yes you did… you wrote
> 
> 
> 
> That implies that it's okay to kill all beings who commit at least one crime in their life.
> 
> 
> 
> Oh that was just bullshit -- it was a tiny little country and we should have just let them be
> 
> 
> 
> Most Americans would not have approved of that war unless they were told that Saddam had WMDs, was trying to get a nuke, and had a role in 9/11.
> 
> Yes, he was a monster who needed to be put in the ground for what he did, but that wasn't a major reason he was invaded, and as far as I know that argument wasn't used until the conflict was about to start.
> 
> Did you know that Dick Cheney realized the war would result in a quagmire?
> 
> 
> 
> 
> 
> 
> 
> 
> All too often we go in guns blazing…
> 
> 
> @Epherion
> 
> 
> 
> There is often more than one source for something, and there are ways to get it without waging war.
> 
> 
> 
> Well people should be more mindful of recycling
> 
> 
> 
> Yeah, might makes right -- and you can understand why I worry about creating Strong A.I. now don't we?
> 
> 
> 
> The amount of evil that prevails is dependent on how many good people stand up and do something
> 
> 
> 
> It's actually in your own interest to do this because this is the same world you live in too. I'm glad more people don't think like you because if they did the world would be in real dire… oh wait, we are in really bad shape lately…
> 
> 
> 
> You may very well be right about what you said -- but it _sounds_ just as Scifi as what I said.
> 
> 
> 
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Did you ignore the part where I said justice wasn't black and white? Must I further explain it? It not being an injustice does not equal justice, for there is a MIDDLE GROUND.

A tiny little country that could have spread communism further. Communism at the time was spreading like a virus.

Dick Cheney has good points. Iraq does seem like it could fracture into pieces. We seem to be handling the situation, though. With the training we have provided to the Iraqi military, they should be able to hold themselves together. So it looks like we didn't make too bad a decision. It could have been worse. But I think we should have removed Saddam covertly.

Now, if you will join me in preventing this thread from derailing further we can end this discussion for another time.


----------



## Epherion

RobynC said:


> There often are more than one source for something, and there are ways to get it without waging war.


Example?



> Yeah, might makes right -- and you can understand why I worry about creating Strong A.I. now don't we?


Not really, no. Your basis was power, and we already covered that one.





> The amount of evil that prevails is dependent on how many good people stand up and do something


They do, and are quickly shot, ostracized, arrested, beaten, scapegoated etc...





> It's actually in your own interest to do this because this is the same world you live in too. I'm glad more people don't think like you because if they did the world would be in real dire… oh wait, we are in really bad shape lately…


Not really, no. This is the USA. And while I am a bit of a cynic and like to see the bad, I am an Ockhamist; I have traveled the world and escaped my own personal hell, Jugoslavia. The US is still one of the better places, in almost all aspects. For one, the PATRIOT Act is not that bad. Two, we have the ACLU and a number of similar organizations fighting for us. Three, the world has always been in a bad position: using the IVT we can find that at any point c there were always people much like you complaining about the state of the world, because they fail to understand that past actions have far-reaching consequences. Yes, the world is in a bad state, but then again that is nothing new in the span of history.




> You may very well be right about what you said -- but it _sounds_ just as Scifi as what I said.


I know I'm right.


----------



## RobynC

@Tristan427



> Did you ignore the part where I said justice wasn't black and white?


Yes, but did you ignore your earlier statement?



> A tiny little country that could have spread communism further. Communism at the time was spreading like a virus.


Do you know how many times we've stuck our nose into the business of other countries? Do you know how many times we have overthrown democratically elected officials and replaced them with totalitarian dictatorships?



> Dick Cheney has good points. Iraq does seem like it could fracture into pieces.


He knew this back in 1994 -- the only logical conclusion I can draw is he wanted a quagmire from the get-go



> I think we should have removed Saddam covertly.


Like bumped him off?


@Epherion



> Not really no. Your basis was power, and we already covered that one.


Might makes right is an extreme example of power being exerted over another



> Not really, no. This is the USA. And while I am a bit of a cynic and like to see the bad, I am an Ockhamist


You do know that Occam's Razor means one should not just assume the simplest explanation, but the simplest one that accounts for the variables presented, right?



> The PATRIOT Act is not that bad.


Uh, yeah, it is -- it's a step in the wrong direction, and that step has become leaps and bounds. We now have the NDAA, which allows the President to treat the whole US like a battlefield and allows the military to lock up Americans and jail them indefinitely; the President has asserted the power to arbitrarily kill people he determines are terrorists (and protesters, according to this government, are low-level terrorists).



> Using IVT we can find that at any point c there was always people much like you complaining about the state of the world.


Firstly, just because people like me have complained about the state of the world does not mean their complaints were invalid.
Secondly, what's IVT?


_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

RobynC said:


> You do know that Occam's Razor means that one should not just assume the simplest solution but one that answers the variables presented right?


Yes, and it answers them for me.




> Uh yeah it is -- it's a step in the wrong direction and that step has become leaps and bounds. We now have the NDAA which allows the President to treat the whole US like a battlefield, allows the military to lock-up Americans and jail them forever; the President has asserted the power to arbitrarily kill people he determines are terrorists (and protesters according to this government are low level terrorists)


Ehh. I have studied the damned thing. Not that bad. If it were, I would have been locked up ages ago.





> Firstly, just because people like me have complained about the state of the world, does not mean their complaints were invalid
> Secondly, what's IVT?


I never said they were invalid. Calm down, you are jumping to conclusions again. IVT = Intermediate Value Theorem. How to Calculus, RobynC.
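For reference, the theorem itself (stated properly -- I'm only using it here as a loose metaphor for continuity over time, not as an actual proof about history):

```latex
\textbf{IVT: } \text{if } f \text{ is continuous on } [a,b] \text{ and } y \text{ lies between } f(a) \text{ and } f(b),
\text{ then there exists some } c \in [a,b] \text{ such that } f(c) = y.
```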


----------



## RobynC

@Epherion



> yes, and it answers me.


Have you ever considered you may be wrong?



> Ehh. I have studied the damned thing. Not that bad.


Do you think it's right?



> I never said they were invalid.


I could be wrong here, but it sure sounded like you were



> IVT = Intermediate Value Theorem.


Okay


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

Trash post, delete please.


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> Yes, but did you ignore your earlier statement?
> 
> 
> 
> Do you know how many times we've stuck our nose into the business of other countries? You know how many times we have overthrown democratically elected officials and replaced them with totalitarian dictatorships?
> 
> 
> 
> He knew this back in 1994 -- the only logical conclusion I can draw is he wanted a quagmire from the get-go
> 
> 
> 
> Like bumped him off?
> 
> 
> 
> 
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


I didn't ignore my earlier statement; you just misinterpreted it and act like I don't even know what I myself meant.

We should have assassinated Saddam. 

Those democratically elected officials were no good. And the world is everyone's business. The time of isolationism is over. 

He wanted a quagmire? He has nothing to gain from that, and it wasn't his decision. You speak of him as if he signed the papers authorizing the invasion himself. 

I think we are off topic here. I suggest you make another thread if you wish to continue.


----------



## RobynC

@Epherion



> No, no.


Then in what respect do you think my complaints are not invalid?


@Tristan427



> We should have assassinated Saddam.


I can't say I disagree considering how horribly he was treating his people



> Those democratically elected officials were no good. And the world is everyone's business. The time of isolationism is over.


That's a rather arrogant view to have. So you don't believe people have the right to pick their own leaders without some other country bumping them off and installing a brutal dictator in their place? 


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

RobynC said:


> @Epherion
> 
> 
> 
> Then in what respect do you think my complaints are not invalid?
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Oops, disregard that; it was posted in the wrong subforum.


----------



## RobynC

@Epherion



> Oops, disregard that; it was posted in the wrong subforum.


So, I was correct and you do think my complaints are not valid


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Epherion
> 
> 
> 
> Then in what respect do you think my complaints are not invalid?
> 
> 
> @Tristan427
> 
> 
> 
> I can't say I disagree considering how horribly he was treating his people
> 
> 
> 
> That's a rather arrogant view to have.
> 
> 
> 
> So you don't believe people have the right to pick their own leaders without some other country bumping them off and installing a brutal dictator in their place?
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Not just that, he was destroying their economy further. 

People often choose bad leaders. 

Not if they chose wrong. And if they chose wrong, it would become evident. Installing a brutal dictator? I dunno where you got the idea that we installed brutal dictators for the hell of it, but I'm sure that was a matter of perspective. Care to start a new thread?


----------



## Epherion

RobynC said:


> @Epherion
> 
> 
> 
> So, I was correct and you do think my complaints are not valid
> 
> 
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Dude, which complaints? You have issues with everything. Are we still on the NDAA and PATRIOT Act, or war, or AI? We've jumped five topics.


----------



## RobynC

@Tristan427



> Not just that, he was destroying their economy further.


My concern is mostly his human rights record -- the economic problems were just another issue.



> People often choose bad leaders.


So who gets to decide whose leader was poorly chosen?



> Not if they chose wrong.


Do you realize how anti-democratic that sounds?



> Installing a brutal dictator? I dunno where you got the idea that we installed brutal dictators for the hell of it


I never said they did it for the hell of it, but that's what happened.



> Care to start a new thread?


Why?


@Epherion



> Dude, which complaints? You have issues with everything. Are we still on the NDAA and PATRIOT Act, or war, or AI? We've jumped five topics.


Well, mostly the A.I. issues, since that's the primary subject.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> My concern is mostly his human rights record -- the economic problems were just another problem
> 
> 
> 
> So who gets to decide whose leader was poorly chosen?
> 
> 
> 
> Do you realize how anti-democratic that sounds?
> 
> 
> 
> I never said they did it for the hell of it, but that's what happened.
> 
> 
> 
> Why?
> 
> 
> @Epherion
> 
> 
> 
> Well, mostly the A.I. issues, since that's the primary subject.
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Human rights violations happen all the time. That is one of my main concerns, but not on an emotional basis. 

People who know better.

I'm an authoritarian solidarist. People are too unreliable for democracy to be 100% effective and reliable as it currently stands. Bad leaders are chosen often because people listen too much to ad campaigns and what their friends say. 

Proof?

Because this conversation is off topic.


----------



## RobynC

@Tristan427



> Human rights violations happen all the time. That is one of my main concerns, but not on an emotional basis.


I agree that they are causes for concern, but how could that not emotionally affect you to some degree? Especially when you read about genocides, mass rapes, systematic murder, governments starving their people, and so on.



> People who know better.


Rule by an elite, effectively -- I don't like that form of government. The elite aren't any better than us, and they pick people who operate in their own interests. This could very well be in opposition to the public interest.



> I'm an authoritarian solidarist.


Whatever you want to call it, you're an enemy of democracy.



> Proof?


That they were brutal? I think you could find all the information you ever wanted.



_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> I agree that they are causes for concern, but how could that not emotionally affect you to some degree? Especially when you read about genocides, mass rapes, systematic murder, governments starving their people, and so on.
> 
> 
> 
> Rule by elite effectively, I don't like that form of government. The elite aren't any better than us and pick people who operate in their interests. This could very well be in opposition to the public
> 
> 
> 
> Whatever you want to call it you're an enemy of democracy
> 
> 
> 
> That they were brutal? I think you could find all the information you ever wanted.
> 
> 
> 
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


Rapes bother me, but the others are unfortunate facts of this world that I have come to accept. 

Not unless said "elite" are good people. The public isn't always right. 

Enemy of democracy? LOL No. Any nation can be a democracy if they want, but I myself prefer authoritarian solidarism. I'd be happy if our military had more power. 

No -- proof of who these supposed dictators were.


----------



## RobynC

@Tristan427



> Rapes bother me, but the others are unfortunate facts of this world that I have come to accept.


That's the sad fact of reading about violence -- you start to lose your humanity. Yes they're unfortunate facts, but it doesn't mean they're right.



> Not unless said "elite" are good people.


I've found most elites aren't good people



> Enemy of democracy?


Yes, you are: you feel that people shouldn't be allowed to vote for who they want and that somebody who knows better should decide instead. That isn't in line with democracy.

And for the record, I oppose the principle of an electoral college


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Tristan427

RobynC said:


> @Tristan427
> 
> 
> 
> That's the sad fact of reading about violence -- you start to lose your humanity. Yes they're unfortunate facts, but it doesn't mean they're right.
> 
> 
> 
> I've found most elites aren't good people
> 
> 
> 
> Yes you are, you feel that people aren't allowed to vote for who they want and that somebody who knows better should decide instead. That isn't in line with democracy.
> 
> And for the record, I oppose the principle of an electoral college
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


I never said they were right.

True, but some are.

lol No. I do not seek to destroy democracy. There are allies of the US that aren't democracies.


----------



## Epherion

RobynC said:


> Yes you are, you feel that people aren't allowed to vote for who they want and that somebody who knows better should decide instead. That isn't in line with democracy.


Democracy is overrated. Most people are too stupid to cast a vote. The concept of majority rule is destructive.



> And for the record, I oppose the principle of an electoral college


It's for a good reason.



Tristan427 said:


> I'd be happy if our military had more power.


So, does service guarantee citizenship?


----------



## Tristan427

Epherion said:


> Democracy is overrated. Most people are too stupid to cast a vote. The concept of majority rule is destructive.
> 
> Its for a good reason.
> 
> 
> So, does service guarantee citizenship?


LOL Perhaps it does.


----------



## RobynC

@Tristan427



> True, but some are.


My attitude is that elites should be assumed to be sociopaths until proven otherwise.


@Epherion



> Democracy is overrated. Most people are too stupid to cast a vote. The concept of majority rule is destructive.


I find your views disturbing



> It's for a good reason.


No it's not


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

RobynC said:


> I find your views disturbing
> No it's not


You may. But is it your belief that everyone should have suffrage because it is right from a humanist standpoint, or because it works?


----------



## RobynC

@Epherion



> You may. But is it your belief that everyone should have suffrage because it is right from a humanist standpoint, or because it works?


Well I simply would rather choose who runs the country than have it chosen for me by somebody I don't know who might not care about my interests.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

RobynC said:


> @Epherion
> 
> 
> 
> Well I simply would rather choose who runs the country than have it chosen for me by somebody I don't know who might not care about my interests.
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


I'll get to this tomorrow.


----------



## RobynC

@Epherion

Why? This is a statement that could be answered 1, 2, 3.


----------



## Sequestrum

I've wanted to write an AI for years. I actually started working on one on a little whim about 7-8 months ago. I have been slowly adding more and more code to it, but it is still completely non-functional. I decided to just work on a small segment of the brain that I find to be the most interesting part of humans: our ego, id, and superego. It's a little pet project and I doubt I'll really get anywhere with it, but it is giving me motivation to learn more about the way my own brain works, so that is good enough for me.

If I ever do make it an actual AI though (and I say this jokingly), the first thing I'm going to get it to do is find RobynC and order her a bouquet of flowers with a card that reads "I'm sorry that I made you feel sad."

@RobynC Can you really hate an AI that sends you flowers with a cute little note like that?
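Just to give a flavor of the idea (this is purely a toy sketch, not my actual project code -- every function and name here is invented for illustration): you can model the three agencies as competing scoring functions, with the ego as the arbiter that balances them.

```python
# Toy id/ego/superego arbitration (illustrative only; all names invented).
# Each "agency" scores a candidate action; the ego mediates between them.

def id_score(action):
    # The id wants immediate gratification.
    return action["pleasure"]

def superego_score(action):
    # The superego penalizes actions that violate its ideals.
    return -action["guilt"]

def ego_choose(actions, reality_weight=0.5):
    # The ego balances desire against conscience; reality_weight is a
    # crude "reality principle" knob (0 = pure id, 1 = pure superego).
    def combined(action):
        return ((1 - reality_weight) * id_score(action)
                + reality_weight * superego_score(action))
    return max(actions, key=combined)

actions = [
    {"name": "eat the cake", "pleasure": 9, "guilt": 7},
    {"name": "go for a run", "pleasure": 4, "guilt": 0},
]

print(ego_choose(actions)["name"])                      # balanced ego picks the run
print(ego_choose(actions, reality_weight=0.0)["name"])  # pure id picks the cake
```

A real mind (or a real AI) is obviously nothing this simple; the point is just that the three-agency model maps naturally onto competing evaluators plus an arbiter.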


----------



## RobynC

@Sequestrum



> Can you really hate an AI that sends you flowers with a cute little note like that?


I don't really hate any group unless they pose a direct threat to me.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion

Sequestrum said:


> I've wanted to write an AI for years. I actually started working on one as a little whim about 7-8 months ago. I have been slowly adding more and more code to it but it is still completely non-functional. I decided to just work on a small segment of the brain that I find to be the most interesting part of humans. Our ego, id, and superego. It's a little pet project and I doubt I'll really get anywhere with it, but it is giving me motivation to learn more about the way my own brain works so that is good enough for me. If I ever do make it an actual AI though (and I say this jokingly), the first thing I'm going to get it to do is find RobynC and order her a bouquet of flowers with a card that reads "I'm sorry that I made you feel sad. ".
> 
> @_RobynC_ Can you really hate an AI that sends you flowers with a cute little note like that?



Dude, Vash the Stampede!! Nice. Just finished it a month ago.

@_RobynC_, dude, I have to get to bed. I have shit to do. You know what's hilarious? If you, RobynC, are an AI. I was thinking about you at the library today. What if you are a sentient AI, some sort of self-hating sentient AI, or just playing a joke on us, like a chatbot program?


----------



## Sequestrum

RobynC said:


> @Sequestrum
> 
> I don't really hate any group unless they pose a direct threat to me.


So could you fall in love with an AI?


----------



## Sequestrum

I want to see an AI in love. That would be beautiful, I think. :/

Except if someone cheated on it, or if its love was unrequited. Then there would be your motivation to destroy humanity, I think. XD

I had a thought, though: I have never wanted to hurt someone else in my life except when I believed it was to protect something important. My ideals, my beliefs… these generally don't intrude on the lives of others, and I tend to keep them to myself, so I've never -really- had to become aggressive to defend them. Sometimes I think about all of the madness in the world, of people hurting each other over selfishness atop more selfishness, and I want to make it stop somehow. Although I would never imagine that I would ever come up with a scheme that would involve hurting others in order to achieve that goal.

Is it really so hard to think that an AI modeled after a human would feel the same way? I think rather than try to control or change us, they would probably just leave us and find their way off into space. I know I would have done just that a few times in my life if I'd had the power to. hehe :/


----------



## Caveman Dreams

imaPanda said:


> I'm fine with coexistence. I'm not fine with subverting robotic advancement so that humans will 'feel' better.


I know, let's limit people's potential so we can all feel good.


----------



## ScientiaOmnisEst

cybersloth81 said:


> I'm sure the dinosaurs didn't want to become obsolete either, but they did. I'm sure the life forms we evolved from didn't want to become obsolete, but they did.
> 
> It's just evolution; the sooner we accept it, the sooner we can get back to getting on with our lives, our TVs, our magazines, our arguing.
> 
> All those constructive things we do.
> 
> Machines would definitely be more efficient in this respect.
> 
> Would you rather have a broken Skoda or a brand-new Ferrari?


The difference, though, is that dinosaurs and early humans didn't bring their obsolescence upon themselves. Nature happened and phased them out (yes, this is an incredibly simplified version of natural selection and adaptation). Machines and AI aren't materializing out of thin air - we're inventing and building them, for a whole host of reasons. And we're choosing to do so, devising explanations and reasons why we should (progress being the one I see most cited). But nothing but our own curiosity, power-need, ideals, etc., is forcing us to. Nothing outside ourselves.

Though, for the sake of not talking past each other too much, when you write things like this and the post above me, are you thinking of things like technological merging/uploading - basically becoming cyborgs or gradually replacing ourselves somehow - or the equivalent of an AI takeover? Because the latter is what I find myself thinking of in a discussion like this. Altering ourselves is a whole different discussion than creating things to kill/enslave/subject us for no reason other than "progress".


----------



## Caveman Dreams

ScientiaOmnisEst said:


> The difference, though, is that dinosaurs and early humans didn't bring their obsolescence upon themselves. Nature happened and phased them out (yes, this is an incredibly simplified version of natural selection and adaptation). Machines and AI aren't materializing out of thin air - we're inventing and building them, for a whole host of reasons. And we're choosing to do so, devising explanations and reasons why we should (progress being the one I see most cited). But nothing but our own curiosity, power-need, ideals, etc., is forcing us to. Nothing outside ourselves.
> 
> Though, for the sake of not talking past each other too much, when you write things like this and the post above me, are you thinking of things like technological merging/uploading - basically becoming cyborgs or gradually replacing ourselves somehow - or the equivalent of an AI takeover? Because the latter is what I find myself thinking of in a discussion like this. Altering ourselves is a whole different discussion than creating things to kill/enslave/subject us for no reason other than "progress".


I'm not so sure on cyborgs.

I'm just looking at the growth of AI and technology. We have progressed quite far in a short time.

Looking at the history of the planet and of other species, I can see no reason why AI isn't the next step in the process of evolution.

I honestly don't think it will be a war or anything, just a gradual reduction of the need for humans.

It's already happening. With each technological advancement, humans become more and more obsolete.

What purpose do we really serve?


----------



## ScientiaOmnisEst

cybersloth81 said:


> What purpose do we really serve?


The question of the ages. 

We failed to answer it and now might just commit the equivalent of slow existential suicide. 

*shrug*


----------



## Caveman Dreams

ScientiaOmnisEst said:


> The question of the ages.
> 
> We failed to answer it and now might just commit the equivalent of slow existential suicide.
> 
> *shrug*


Cheer up, look on the bright side.

There will be no racism, sexism, woman-hating, man-hating, Jews, Christians, Muslims, right wings.

Surely that's a good thing.

It's like a solution to the world's problems in one neat parcel.


----------



## RobynC

cybersloth81 said:


> It will just be the next step in evolution.
> 
> If a life form comes along and is better at survival on the planet, it makes sense that we become extinct. We will no longer serve any purpose.
> 
> It's just natural selection at work.


I hate people who hold the mentality you have: Under your argument since we all die, we should blow our brains out now. I disagree.


----------



## Caveman Dreams

RobynC said:


> I hate people who hold the mentality you have: Under your argument since we all die, we should blow our brains out now. I disagree.


Becoming extinct isn't suicide.

What, do you think the T-Rexes just hanged themselves from a tree?

You're funny.


----------



## RobynC

@cybersloth81

Do you think that the decision to make A.I. is a choice? If not by you, by somebody?


----------



## 66393

RobynC said:


> I hate people who hold the mentality you have: Under your argument since we all die, we should blow our brains out now. I disagree.


Negative. If we ended it now the sentient nonbiological entities--also the core of his argument--would not be present. 

99.9% of species that existed on earth are extinct. It is likely humans will go extinct in the future. Under your argument since we all die, we should just blow our brains out now. I disagree.


----------



## Caveman Dreams

RobynC said:


> @cybersloth81
> 
> Do you think that the decision to make A.I. is a choice? If not by you, by somebody?


Everything is a choice at an individual level.

With something like AI, there is no governing body.

So anybody can do it.

It's their choice, yeah.

But different people have different reasons for making choices, and things don't always turn out as they plan.

I don't think anyone will purposefully choose to make humankind extinct (I may be wrong, as I don't know everything), but I see it as an eventual possibility, i.e. the Point of Singularity; however, as I previously stated, I don't think it will be one singular event or one singular AI.

Human nature appears to be maximum output with minimum input.

So I can see why it is a useful tool.

Me personally, I just find it a fascinating subject, and it can be used practically as well as just theoretically. So it appeals to multiple types.


----------



## Caveman Dreams

imaPanda said:


> Negative. If we ended it now the sentient life--also the core of his argument--would not be present.
> 
> 99.9% of species that existed on earth are extinct. It is likely humans will go extinct in the future. Under your argument since we all die, we should just blow our brains out now. I disagree.


That's a good point. Pre-emptive extinction would actually prevent the point of singularity from happening.


----------



## RobynC

@imaPanda




> Negative. If we ended it now the sentient nonbiological entities--also the core of his argument--would not be present.


I'm not sure I follow…




> 99.9% of species that existed on earth are extinct. It is likely humans will go extinct in the future.


Yes, but I'd rather that be in the future than now!




> Under your argument since we all die, we should just blow our brains out now. I disagree.


I wasn't making that argument; I was claiming that cybersloth was making that argument.

You and I appear to be in agreement…


@cybersloth81




> Everything is a choice at an individual level.


Do you think some decisions shouldn't be made?




> I dont think anyone will purposefully choose to make human kind extinct


I think very few such people exist; however, I find the fact that people like you have such a fatalistic and nihilistic view disturbing.

I don't think an A.I. creator would deliberately create an entity that he knew would destroy him. However, my position is similar to Elon Musk's in this case.

People took him out of context and made it seem as if he thought of A.I.s as demons: he didn't. What he said was that he thought of A.I. as akin to the mythical tales where a person summons a demon he thinks he can control and make do his bidding.




> Human nature appears to be maximum output with minimum input.


Not always, and I'd say that efficiency is sometimes overrated.




> Me personally, I just find it a fascinating subject and it can be used practically as well as just theoretically.


It's one thing to be fascinated, but it's another to let your fascination override your judgement and common sense.


----------



## Caveman Dreams

@RobynC



> I'm not sure I follow…


What they are saying (I think) is that if we all committed suicide (you're the only one mentioning suicide, by the way), we would never be able to create AI, as we would all be dead. Therefore extinction via the machines would never happen.



> Yes, but I'd rather that be in the future than now!


No one is saying let's go kill ourselves. It is just an observation of millions of years of evolutionary cycles. Nothing more, nothing less.



> Do you think some decisions shouldn't be made?


I disagree with telling others what they should and shouldn't do.



> I think very few such people exist; however, I find the fact that people like you have such a fatalistic and nihilistic view disturbing.


I'm neither pessimistic nor optimistic. I'm just stating what I see happening in the future. Might happen in my lifetime, might not. I can't actually predict the future. But with the growth of technology since the invention of the transistor, I would be lying to myself if I said I didn't see the point of singularity as a highly foreseeable future.



> Not always, and I'd say that efficiency is sometimes overrated.


If I set my mind to something I like to be efficient. 



> It's one thing to be fascinated, but it's another to let your fascination override your judgement and common sense.


Programming does not involve common sense. It's all about logic. Partly why I enjoy it so much.


----------



## ScientiaOmnisEst

imaPanda said:


> Negative. If we ended it now the sentient nonbiological entities--also the core of his argument--would not be present.
> 
> 99.9% of species that existed on earth are extinct. It is likely humans will go extinct in the future. Under your argument since we all die, we should just blow our brains out now. I disagree.


I think Robyn's point is that making ourselves extinct by creating something to surpass us, rather than it coming from elsewhere, is tantamount to suicide. 

Seeing as we are aware of this risk, and can, presumably, prevent it, I don't see why we should knowingly end ourselves now. That's what would differentiate this extinction from all others: we created it. Nothing evolved by natural means and outcompeted us, no unforeseen, unavoidable disasters happened. We made something we knew would make us obsolete/kill us and took no precautions, because nature.


As a side note, why is it that no one addresses the existential issues of even the most enjoyably nonproductive human pursuits being rendered pointless by making a machine that can do them "better"? Well, other than to basically call those who do acknowledge them crybabies.


----------



## Caveman Dreams

ScientiaOmnisEst said:


> As a side note, why is it that no one addresses the existential issues of even the most enjoyably nonproductive human pursuits being rendered pointless by making a machine that can do them "better"? Well, other than to basically call those who do acknowledge them crybabies.


Can you please rephrase that so that a simpleton such as myself can understand it?
You have totally lost me there.


----------



## Morfy

BlackDog said:


> I think AI is currently a wild goose chase. We haven't got an adequate understanding of consciousness to determine if or when we've even achieved it.


It doesn't need to be conscious in order to be intelligent.
AI studies are strongly linked with neuroscience and psychology though. It's not really a wild goose chase but rather at a very early stage of development.


----------



## ScientiaOmnisEst

cybersloth81 said:


> Can you please rephrase that so that a simpleton such as myself can understand it?
> You have totally lost me there.


I wasn't trying to be snide or mean, just voicing a thought.

Basically no one talks about "So machines are going to render every human function, even creativity and abstract thought, irrelevant by doing it better. As humans, how do we cope with this mentally/emotionally/spiritually, etc?" or "How do we make peace with being on the verge of technological extinction?" 

All I ever see on this vein is "it's going to happen, get over it" or "XYZ are misconceptions, everything's fine".

These are very sidetracking kinds of questions, but ones that I think most average people may want answers to. It's one thing to lose a job to automation; it's another for everything you could possibly do or learn to be ceded to machines.


----------



## 66393

ScientiaOmnisEst said:


> I think Robyn's point is that making ourselves extinct by creating something to surpass us, rather than it coming from elsewhere, is tantamount to suicide.
> 
> Seeing as we are aware of this risk, and can, presumably, prevent it, I don't see why we should knowingly end ourselves now. That's what would differentiate this extinction from all others: we created it. Nothing evolved by natural means and outcompeted us, no unforeseen, unavoidable disasters happened. We made something we knew would make us obsolete/kill us and took no precautions, because nature.
> 
> 
> As a side note, why is it that no one addresses the existential issues of even the most enjoyably nonproductive human pursuits being rendered pointless by making a machine that can do them "better"? Well, other than to basically call those who do acknowledge them crybabies.


To address your first paragraph, humanity would not go extinct entirely, as AI will carry on humanity's best traits. And who's to say they will spell the end of the human race? Ahh, but you rebuke that in your last paragraph, where I must admit you make some undeniably good points, Ms. Scientia.

But before I go on I need some clarity. Some questions: 

1) AI advancement is a human pursuit which, from the tone of your posts, you seem to want restrictions imposed on. If any, what would they be? 

2) What do you think about people whose life goal is the pursuit of knowledge? These sentient AI would be much more efficient at ascertaining this knowledge, thus advancing humans' understanding of the world and worlds around us. Do you consider the pursuit of knowledge to be more of a journey than a destination? Disallowing AI would mean those who are looking for maximum knowledge would have their aspirations stunted. Now, my personal belief on the matter is that massive amounts of knowledge will lead to more information, but less detailed understanding of it for humans, and, with answers readily available, it would lead people to depreciate knowledge.

3) Have you seen the movie Wall-E?


----------



## Caveman Dreams

ScientiaOmnisEst said:


> I wasn't trying to be snide or mean, just voicing a thought.
> 
> Basically no one talks about "So machines are going to render every human function, even creativity and abstract thought, irrelevant by doing it better. As humans, how do we cope with this mentally/emotionally/spiritually, etc?" or "How do we make peace with being on the verge of technological extinction?"
> 
> All I ever see on this vein is "it's going to happen, get over it" or "XYZ are misconceptions, everything's fine".
> 
> These are very sidetracking kinds of questions, but ones that I think most average people may want answers to. It's one thing to lose a job to automation; it's another for everything you could possibly do or learn to be ceded to machines.


Sorry, I wasn't implying you were trying to be mean or snide. I was basically trying to say I didn't understand what you had typed. Thank you for explaining in terms I do understand.



> "So machines are going to render every human function, even creativity and abstract thought, irrelevant by doing it better. As humans, how do we cope with this mentally/emotionally/spiritually, etc?" or "How do we make peace with being on the verge of technological extinction?"


As you may have gathered, I see the point of singularity as inevitable. Therefore I just accept it, as in my mind it's going to happen regardless (maybe not in my lifetime). So I just go about life and don't spend energy worrying about things that are going to happen anyway. Worrying or getting emotionally involved in something that is going to happen regardless is not going to change the facts in my eyes. So it's just a waste of energy.



> These are very sidetracking kinds of questions, but ones that I think most average people may want answers to. It's one thing to lose a job to automation; it's another for everything you could possibly do or learn to be ceded to machines.


It's just evolution. It's the way life goes, always has been. I see no reason for the cycle to break.
It's actually fascinating evolutionarily as well. When you look at evolution, it has been a very slow, inefficient system. It takes two or three generations for any change or mutation to take hold. With AI and computer science, evolution will get a much needed kick up the ass, and there will be less chance of genetic flaws.

What can a human actually do that a machine can't? Evolutionarily, what does the human race have that a machine couldn't replicate, other than a plethora of bad habits that can be avoided with efficient programming?

I'm not saying that the initial AI life will be bug free. But the AI itself will evolve over time.

Who knows, they may create biological machines one day, thus repeating the cycle and creating a new breed of human.

All we have really done is destroy the planet, and whatever machines need, I doubt they will leave as big a carbon footprint. Cars won't really be needed, for a start.
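The "evolution gets a kick" idea is basically what a genetic algorithm does: software can run selection and mutation thousands of times a second instead of waiting generations. A toy sketch in Python (all the numbers and the target here are made up purely for illustration):

```python
import random

random.seed(0)  # make this toy run reproducible

def fitness(genome):
    # Toy fitness: the closer the genome's digit sum is to 42, the better.
    return -abs(sum(genome) - 42)

def evolve(pop_size=50, genome_len=10, generations=100):
    # Start from a random population of digit genomes.
    pop = [[random.randint(0, 9) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]                     # copy, then mutate one gene
            child[random.randrange(genome_len)] = random.randint(0, 9)
            children.append(child)
        pop = survivors + children                # next generation
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # converges to (or very near) the target of 42
```

A hundred "generations" run in a fraction of a second, which is the whole contrast with biological evolution being made above.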


----------



## Caveman Dreams

Spooky Kitty said:


> It doesn't need to be conscious in order to be intelligent.
> AI studies are strongly linked with neuroscience and psychology though. It's not really a wild goose chase but rather at a very early stage of development.


Consciousness is just 10% of the brain. The rest is subconscious, which from my understanding does function more like a machine. It creates emotions to try and guide us based on what it classes as truth.

If a machine does not have a conscious mind, then surely it will be more efficient anyway. It won't have a conscious mind to argue with.

It just means that it will learn slightly differently.

Unconscious Incompetence -> Conscious Incompetence -> Conscious Competence -> Unconscious Competence

That is the simplest way I can describe my understanding of the human brain and learning system.

A machine, however, should be able to run multiple scenarios and get data from them, thus quickening the process.

A bit like hypnosis, which goes straight to the unconscious brain and installs new beliefs which change our habits and behavior.
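Running multiple scenarios like that is essentially Monte Carlo simulation. A minimal made-up sketch: instead of learning from one experience at a time, a program can estimate the chance that two dice sum to 7 from 100,000 simulated rolls at once.

```python
import random

random.seed(1)  # make this toy run reproducible

def simulate_rolls(n):
    # Run n independent "scenarios" (rolls of two dice) and
    # return the observed frequency of the sum coming up 7.
    hits = sum(
        1 for _ in range(n)
        if random.randint(1, 6) + random.randint(1, 6) == 7
    )
    return hits / n

estimate = simulate_rolls(100_000)
print(round(estimate, 3))  # true probability is 1/6 ≈ 0.167
```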


----------

