# Human AI Theory About to be Tested



## wuliheron (Sep 5, 2011)

Computer scientist leads the way to the next revolution in artificial intelligence

While IBM has taken the brute-force reductionist approach to reproducing much of the intelligence of the mammalian brain, these researchers have taken the more ethereal theoretical approach. Exactly what their theory is all about, I haven't a clue, but now would be a good time for people to explain it -- possibly before the computers start to explain it to us themselves. :wink:


----------



## Pete The Lich (May 16, 2011)

FINNALLLYYY!!!

AI development needs to be fast-tracked because computer evolution is exponentially faster than organic evolution.

innovations would come out every day

also

*Cleverbot comes very close to passing the Turing Test*

_Techniche 2011, IIT Guwahati, India, 3rd September 2011_ 
A high-powered version of Cleverbot took part alongside humans in a formal Turing Test at the Techniche 2011 festival. The results from 1,334 votes were astonishing... 



Cleverbot was judged to be 59.3% human.

 
The humans in the event achieved just 63.3%.

 
"It's higher than even I was expecting, or even hoping for. The figures exceeded 50%, and you could say that's a pass. But 59% is not quite 63%, so there is still a difference between human and machine." _- Rollo Carpenter_

To answer _your_ question, the way it looks to me:

*"Each time a Super-Turing machine gets input it literally becomes a different machine," Siegelmann says.* "You don't want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you'd like one that can navigate in a *dynamic environment.* If you want a machine to interact successfully with a human partner, you'd like one that can *adapt* to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That's what this model can offer."
The program sees the environment = e (whatever it sees).
Then when something changes, it's e = this,
this = that,
that = whatever,
....
So it's constantly changing, comparing what was before to what it is now:
navigating around things by comparing _its_ (the robot's) position relative to other things in real time,
in any environment, while also comparing things to what it has already experienced.
So if it sees a person in the store, it knows people can move; it will focus some attention on that object and predict some outcomes to avoid it, in order to navigate a blind person around a grocery store.

Kind of like how an optical mouse functions, except a mouse is a "Turing" example because it HAS to be on a 2D surface.
If this AI were a mouse, it could work on a 3D surface, i.e. you could lift the mouse up and it would still work fine.
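The loop described above (sense, compare the new snapshot to the previous one, flag what moved, steer around it) might be sketched very loosely in Python. This is a hypothetical toy for illustration only; `sense`, the frame dictionaries, and the `avoid` decisions are all invented, not anything from Siegelmann's actual model:

```python
def navigate(sense, steps=3):
    """Repeatedly re-sense the environment, compare the new snapshot to
    the previous one, and log an avoidance decision for anything that
    moved, the way a guide robot would track a person in a store."""
    previous = sense()            # e = whatever it sees right now
    decisions = []
    for _ in range(steps):
        current = sense()         # the environment changed: e = this
        moved = {obj for obj in current
                 if obj in previous and current[obj] != previous[obj]}
        for obj in sorted(moved):
            decisions.append(f"avoid {obj}")   # predict and steer around it
        previous = current        # what "was" becomes the new baseline
    return decisions

# A tiny scripted run: four snapshots of a store with a shelf and a person.
frames = iter([
    {"shelf": (0, 0), "person": (2, 2)},
    {"shelf": (0, 0), "person": (2, 3)},   # the person moved
    {"shelf": (0, 0), "person": (2, 3)},   # nothing changed
    {"shelf": (0, 0), "person": (2, 4)},   # the person moved again
])
print(navigate(lambda: next(frames)))      # ['avoid person', 'avoid person']
```

The fixed shelf never triggers a decision; only the object whose position changed between snapshots does, which is the "comparing what was before to what it is now" idea.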

I think I explained that right...
I'm using Firefox and I can't adjust the text box size, so I can't see it as a whole :sad:


----------



## Kyrielle (Mar 12, 2012)

Cool. I want a droid for a friend.  Then again, I've been playing too much Star Wars...but really, how cool would it be to have an android as a friend? I would especially love to watch it learn and see how far its consciousness would go. A machine's perspective of the world, of beauty, and emotions would be fascinating.


----------



## Paradox of Vigor (Jul 7, 2010)

It sounds like we're one step closer to the creation of the geth, which is something that I have long awaited.


----------



## RobynC (Jun 10, 2011)

So basically this computer is based on a duplication of the human brain? Logically proceeding along this train of thought, you could theoretically improve upon this and produce a brain better than any human being's.

The issue of course is should we? Looking at the described abilities, it would clearly be a very useful tool for some remarkably capable automated weapons and surveillance systems.

Of course anybody who sounds the alarms as to the danger strong AI poses will simply be ignored and labeled as being ignorant and some kind of luddite -- even despite the fact that computer experts such as Bill Joy have expressed serious worries about robotics and AI -- until eventually somebody creates an A.I. that could seriously endanger mankind.

I think it's kind of funny how such intelligent people can be so short-sighted, and how people who develop technologies to benefit mankind have no real objection to creating an artificially intelligent being that could potentially turn on mankind -- some even welcome it.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## RobynC (Jun 10, 2011)

I just thought about the testing mentioned in the article. They effectively talked about creating this entity and feeding it sensory data to see whether it responds to conditions the way our real brains do, right?

Isn't that sort of like creating a sentient being and putting it in a matrix-like scenario?


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## cynthiareza (Feb 26, 2012)

I am waiting for a Teddy Bear like this.

He would be my best friend!


----------



## Psychosmurf (Aug 22, 2010)

I for one, welcome our Super-Turing overlords.


----------



## RobynC (Jun 10, 2011)

I think I've figured it out: A.I. developers are kind of like religious extremists who believe we're in the end times and have taken it upon themselves to accelerate things along.

Of course they're not really religious, but they treat science as if it is a religion _(something it isn't and shouldn't be)_, and they do believe mankind's days are over and that they should accelerate our demise...


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> I think I've figured it out: *A.I. developers are kind of like religious extremists* who believe we're in the end times and have taken it upon themselves to accelerate things along.
> Of course they're not really religious, *but they treat science as if it is a religion* _(something it isn't and shouldn't be)_, and they do believe mankind's days are over and that they should accelerate our demise...


Transhumanists are the ones you are thinking of. A majority of them hold antihuman stances. Myself included.


----------



## Wulfyn (May 22, 2010)

The question of should we is already being discussed, in depth, in another thread.


A friend of mine came up with an alternative to the Turing test (he has this weird mind that wants to reverse everything... I think that's why I like him). He said that the real benchmark of AI is not whether it is indistinguishable from a human in the Turing test, but whether it can distinguish AI from human better than humans can. That is to say, can the AI tell that it is talking to a computer?
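That benchmark can be stated as a tiny sketch: compare how accurately a machine judge and a human judge label transcripts as human- or machine-written. Everything here (the judges, transcripts, and labels) is invented for illustration; it is just the idea above made concrete.

```python
def accuracy(judge, transcripts):
    """Fraction of transcripts whose true author the judge identifies."""
    return sum(judge(text) == author for text, author in transcripts) / len(transcripts)

def passes_reverse_turing(machine_judge, human_judge, transcripts):
    """The AI 'passes' if it spots AI-vs-human better than humans do."""
    return accuracy(machine_judge, transcripts) > accuracy(human_judge, transcripts)

# A toy pair of judges: the machine judge keys on ALL-CAPS text, while
# the human judge just guesses "human" every time.
transcripts = [("hello there", "human"), ("BEEP BOOP", "machine")]
machine_judge = lambda t: "machine" if t.isupper() else "human"
human_judge = lambda t: "human"
print(passes_reverse_turing(machine_judge, human_judge, transcripts))  # True
```

The interesting design choice is that the benchmark is relative: the machine doesn't need perfect detection, only better-than-human detection.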


----------



## RobynC (Jun 10, 2011)

@Epherion



> Transhumanists are the ones you are thinking of. A majority of them hold antihuman stances.


To some extent, though most of them want to become one with their technology. There are some people, however, who work in the field of A.I. and robotics who do not seem to possess such ambitions but wish to develop strong A.I. at any cost -- even if it meant the destruction of mankind. Some are so enthralled in their curiosity that they either fail to appreciate the consequences or do not care for one reason or another. There are some who feel mankind's days are numbered and feel they should accelerate things along.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## geekofalltrades (Feb 8, 2012)

@RobynC

My friend, you are far, far too pessimistic. You seem to assume that any AI we could design would inherit all of the worst of humanity and none of the best. That it would evolve into a cold, psychopathic killer, and not possess anything human aside from cold logic. If we truly learn to mimic a human brain, then it will be a _healthy_ human brain, one with empathy and conscience. And even if it went wrong, all you have to do to stop it is pull the plug.


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @Epherion
> 
> 
> 
> To some extent, though most of them want to become one with their technology. There are some people, however, who work in the field of A.I. and robotics who do not seem to possess such ambitions but wish to develop strong A.I. at any cost -- even if it meant the destruction of mankind. Some are so enthralled in their curiosity that they either fail to appreciate the consequences or do not care for one reason or another. There are some who feel mankind's days are numbered and feel they should accelerate things along.


You are a bit of a doomsayer. What's your take on Bill Joy? And what is your xp in AI?


----------



## RobynC (Jun 10, 2011)

@Epherion



> Whats your take on Bill Joy.


Bill Joy is a computer scientist who expresses worry about certain technologies including robotics and A.I., and the potential threats they pose to humanity. I don't think all technology is a threat to mankind, but I believe some technology is.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @Epherion
> 
> 
> 
> ...


Bill Joy is predominantly a software engineer (SE). It's only due to that that he expresses concern. Personally, I think the death of humans due to machines is greatly exaggerated. McCarthy himself believes AI will come in 500 years.


----------



## RobynC (Jun 10, 2011)

@Epherion



> It's only due to that that he expresses concern.


Yeah, that's the reason -- look, the fact is anybody who disagrees with you is either ignorant or a luddite...


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @_Epherion_
> look the fact is anybody who disagrees with you is either ignorant or a luddite...


When have I said that? You are jumping to conclusions. Chill. I want to probe you for info. As in, what is your AI xp?


----------



## RobynC (Jun 10, 2011)

@Epherion



> When have I said that? You are jumping to conclusions.


Well, you basically dismissed the concerns of a computer expert. Though he may have been predominantly SE-oriented, he is not lacking in A.I. knowledge.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @Epherion
> 
> 
> 
> ...


I did not dismiss him. I said his concern for all G.N.R. is based on his xp in SE. While highly essential, it's not the only perspective on the matter. Have you heard of Jaron Lanier?

I reiterate: have you heard of the Red papers? You have a habit of answering only parts of my questions.


----------



## Razare (Apr 21, 2009)

A Turing Machine was something I never got to in Comp. Sci.

They did mention Neural Networks!!! XD And those are cool.

I suppose the Turing Model is a way of refining the Neural Network, besides using the method I understand (Genetic Algorithms).

And apparently, she's found an improvement on the traditional Turing Model.

The only thing I don't understand about it is how they define success.

When engineering a neural network, whether by Genetic Algorithms or other mathematical processes, you have to have a way of measuring success.

This tends to create a network that achieves a specific purpose very well. Their article made the AI they're developing sound very open-ended, but I have a hunch it won't be. Or if it were, at best it would exhibit the ability to be trained (which would be no small feat). Though I question how it would be trained. With animals of this world, their instincts give us a basis upon which to mold trained behavior.

With a creature without instincts, even if you start doling out rewards for behavior, I doubt it's going to work without some type of foundation to begin from.

I guess it could work if you programmed some innate instincts, which are, essentially, innate measures of success. Breastfeeding from a mother is something a baby instinctively exhibits. If a similar instinct could be fashioned into a trainable network, you might have something.
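The "measure of success" point above can be made concrete with a toy genetic algorithm. This is a minimal hypothetical sketch, not the model from the article: the `fitness` function is the explicit measure of success, and evolution simply keeps whatever scores best on it.

```python
import random

# Toy illustration: evolving a "network" (here just a flat genome of
# numbers) requires an explicit measure of success. All names and
# parameters here are invented for this example.

def fitness(genome, target):
    """Success measure: negated squared distance to a target behaviour
    (0 is perfect, more negative is worse)."""
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(target, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    # Start from a random population of genomes.
    pop = [[rng.uniform(-1, 1) for _ in target] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        parents = pop[: pop_size // 2]                       # keep the fittest half
        children = [[g + rng.gauss(0, 0.05) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]  # mutated copies
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, target))

best = evolve([0.5, -0.25])   # evolve toward an arbitrary target genome
```

Without `fitness`, the sort in the loop has nothing to sort by; that is exactly why an open-ended system still needs some innate, built-in yardstick before training can shape it.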

Let's make bets on how many years before we end up with robot babies that we train into adult robots!


----------



## RobynC (Jun 10, 2011)

@Epherion



> Did not dismiss him. I said his concern for all G.N.R.


Sounded like a dismissal



> I said his concern for all G.N.R.


They are high risk technologies



> Have you heard of Jaron Lanier?


Not really



> have you heard of the Red papers?


I don't think so...


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> They are high risk technologies
> Not really
> I don't think so...


Jaron Lanier:
Jaron Lanier - Wikipedia, the free encyclopedia

High risk or not, they will come about and are essential to the advancement of the human race. Are you really satisfied with where we are right now? They can be implemented properly; even Bill Joy says so. To draw a parallel, we are doing a good job of maintaining our nuclear arsenal and preventing nuclear weapons from proliferating.


Ooops, I meant Red House Papers:
Prison Planet.com » Top Nazis Planned EU-Style Fourth Reich
The Red House Report - Wikipedia, the free encyclopedia


----------



## RobynC (Jun 10, 2011)

@Epherion



> High risk or not, they will come about


Maybe true, but because something will come about does not mean that it's good, and doesn't mean we should do everything to accelerate things along. This is what separates mainstream Christianity from the End Timers -- both acknowledge the end-times will come and the world will end eventually, but most Christians don't believe in actively doing what they can to further it along faster; the End Timers do.



> are essential to the advancement of the human race.


Why?



> Are you really satisfied at where we are right now?


Not in every respect, but you have to keep in mind some of the areas I'm unsatisfied with have nothing to do with A.I. -- mostly my issues pertain to civil rights, privacy, and corporate and banking misconduct. There are certain medical advances that I think would greatly improve the human race, such as stem-cell research, organ regeneration, and certain treatments for deadly diseases such as prion diseases. None of these things actually requires artificial intelligence greater than that of humans.



> They can be implemented properly, even Bill Joy says so.


Actually, he expressed worries about a technological "arms race" situation in which negative uses of the technology would have to be combatted by defenses against them. I should note each advance would occur faster than the previous one.



> To draw a parallel, we are doing a good job of maintaining our nuclear arsenal, and ensuring nuclear weapons from proliferating.


Apples and oranges -- a nuclear warhead cannot think. The missile that carries the warhead has some A.I. technology to allow it to navigate and find its target.



> Ooops, I meant Red House Papers


While these are certainly disturbing, they don't particularly surprise me. There were a lot of bankers and businessmen who were sympathetic to the Nazi cause _(some members of the OSS such as Allen Dulles and Frank Wisner were as well)_, and we did recruit a lot of Nazi scientists, as well as some members of German intelligence _(some of whom were known to have been involved in war crimes)_.


R.C.

BTW: Normally when I use this tagline I'm sort of joking, but this time I'm serious (especially about all the OSS stuff):

_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @_Epherion_
> 
> 
> 
> ...


Damn it, I have to get to bed. I'll pick this up tomorrow. There are some discrepancies in your logic. You fail to see the benefits of GNR.



Razare said:


> A Turing Machine was something I never got to in Comp. Sci.


A TM is only *a* means of achieving AI, not *the* means. That's the current consensus of AI researchers.


----------



## RobynC (Jun 10, 2011)

@Epherion



> There are some discrepancies in your logic. You fail to see the benefits of GNR.


I never said these technologies didn't have some benefit -- I just think there needs to be limits placed on genetic engineering, nanotechnology, and robotics.

Intelligence enabled us to develop all these things; if we don't use our intelligence and our sense of ethics to manage them, we can easily endanger ourselves, and potentially the world (particularly involving nano-tech).

R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> ; *if we don't use our intelligence and our sense of ethics to manage them we can easily endanger ourselves, and potentially the world (particularly involving nano-tech)*


What makes you think there are not people already attempting to do so?


----------



## RobynC (Jun 10, 2011)

@Epherion



> What makes you think there are not people already attempting to do so?


Misanthropic A.I. designers who have openly expressed that it would be pretty awesome if A.I. turned on mankind and destroyed us, and those who said it wouldn't be a bad thing if mankind was eradicated.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Razare (Apr 21, 2009)

RobynC said:


> @_Epherion_
> 
> 
> 
> Misanthropic A.I. designers who have openly expressed that it would be pretty awesome if A.I. turned on mankind and destroyed us, and those who said it wouldn't be a bad thing if mankind was eradicated.


Yeah, but what happens, happens. Better to focus on what we're given a say on in our own lives.

This world will end. Humanity will cease to be humans, one day. (We may become something else.) This Universe is in a constant state of change. Better to make a positive impact in what ways you can than worry about the many mishaps on the horizon. You might think warning people helps, but I don't think it does.

Most everyone doesn't realize their mistakes until they've committed them. The more humans you put into the equation, the bigger the mistakes get, and the more in denial we are about those mistakes until they manifest a negative consequence. After the negative consequence, humanity begins to learn.

Honestly, I don't see AI presenting the 100% doomsday scenario as some have described. Assume we develop AI more intelligent than us... well, modern science is suggesting that cooperation actually yields superior outcomes to isolationism and conflict. AI would need a modus operandi, just like any other being. How does it define right and wrong?

You see, we presume that because it's AI, it's going to have all the answers. The future is never certain, and the number of variables that play into any situation is infinite. AI could not reason or predict with absolute precision, just as we cannot. Thus their reasoning would have to be principle-based, rather than utilitarian-based.

Their principles would likely be based upon what principles have proven to work in the past. The very same principles most humans agree work. "No murder, violence is bad, let's cooperate... etc." It's just that we have a hard time following our own principles as a species, even if most of us agree upon them.

If we do AI correctly, it'll likely have a lot of the same problems that we do. Its superiority would be that it could easily do traditional computer stuff, since its consciousness could directly interface.

We'll interface our consciousness with computer components in the future too (it's already begun)... then basically it's an even-playing field.


----------



## RobynC (Jun 10, 2011)

@Razare



> We'll interface our consciousness with computer components in the future too (it's already begun)... then basically it's an even-playing field.


As computers advance, the organic components of us will eventually be the limiting factor and will be discarded. You guys like to think that, given the choice of a good or bad outcome, the outcome will most likely be good; I assume that, given the choice between a good or bad outcome, nature tends to choose the bad.



> This world will end. Humanity will cease to be humans, one day.


I understand that, but I'd rather it occur later than sooner. That's my issue, you have people doing everything they can to make it occur sooner than later.



> Most everyone doesn't realize their mistakes until they've committed them. The more humans you put into the equation, the bigger the mistakes get, and the more in denial we are about those mistakes until they manifest a negative consequence. After the negative consequence, humanity begins to learn.


And if I'm right there won't be a next time. 

Why does a crisis have to occur before people do anything? 


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Razare (Apr 21, 2009)

RobynC said:


> @_Razare_
> As computers advance, the organic components of us will eventually be the limiting factor and will be discarded. You guys like to think that, given the choice of a good or bad outcome, the outcome will most likely be good; I assume that, given the choice between a good or bad outcome, nature tends to choose the bad.


Part of my spiritual beliefs has made me more aware of the value of our organic components.

This belief is that souls exist. If we presume for a moment that they do in fact exist (whether you think they do or not), then what would that mean? It would mean that in order for your soul to have any impact upon your physical existence, there would need to be an interface between the physical and spiritual. Without a link, what good would a soul be? It couldn't influence our decisions, or take part in our consciousnesses.

The link between spiritual and physical was discovered/revealed to the ancient civilizations of the East. The oldest record of this knowledge is in the Upanishads. They're called Chakra centers. Each center corresponds to a physical location along the spine. They link spiritual and physical.

Assuming we have souls, AI won't have this developed link between spiritual and physical. Not that I think AI would lack in spirit, but I would equate their spiritual awareness to that of a metal. There's spiritual awareness in everything, just in different degrees. The spiritual awareness of a human soul is something that took ages to develop and can only be interfaced into an appropriate vessel.

In the future, I believe we'll be able to better develop our spiritual awareness to help our planet and ourselves. It's superior to that of the worldly technology that we fashion. So I would not throw it away for the sake of a machine body, I don't think the people of the future will either. Nor do I see how a machine form is superior.

In rare experiences I've had, I've known things that couldn't have been known, and I've seen things that aren't seen. Should we develop to such a degree that we manifest that quality on a daily basis in all of man, I fail to see how a machine could reach 10% of our potential. Calculations without spiritual guidance will always be constrained to physical perceptions and data.

Machine reasoning would lack in hope, resolve, miracles, courage, and what have you. When we point to the greatest moments of mankind, we find some innate human quality being expressed, that only we express. Sure, we can sink to great depths as well, but our potential is higher because of it. The potential of machine reasoning and action is finite.

But it's obviously a religious argument!  Sorry. Logically, I could very well see them destroying mankind, I just don't believe them to be "superior" and I believe, were such a catastrophe to begin, it would incur divine guidance and humans would make it through, somehow.


----------



## RobynC (Jun 10, 2011)

@Razare



> Part of my spiritual beliefs, have made me more aware of the value of our organic components.
> 
> This belief is that souls exist. If we presume for a moment that they do in fact exist (whether you think they do or not), then what would that mean? It would mean that in order for your soul to have any impact upon your physical existence, there would need to be an interface between the physical and spiritual. Without a link, what good would a soul be? It couldn't influence our decisions, or take part in our consciousnesses.


That's a big supposition to make. I don't believe in souls so you can see why my point of view is very different.



> But it's obviously a religious argument!  Sorry. Logically, I could very well see them destroying mankind


That's kind of the problem -- the logical side of the argument



> believe, were such a catastrophe to begin, it would incur divine guidance and humans would make it through, somehow.


I don't think it's a good idea to rely on miracles...


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @Epherion
> 
> 
> 
> ...


Hmm, I presume you have evidence?


----------



## RobynC (Jun 10, 2011)

@Epherion

In the past couple of threads that's come up


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> In the past couple of threads that's come up
> 
> 
> R.C.
> _Remember to seriously read my signature down below and be sure you understand what I mean by it..._


I meant links and things.


----------



## RobynC (Jun 10, 2011)

@Epherion

Google is your friend

BTW: Is that avatar character wearing an SS uniform?

R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @Epherion
> 
> Google is your friend
> 
> ...


No, it is not; Google is chaotic. Secondly, I should not have to do your debating for you. Also, Google is a rudimentary AI, btw.
And yes, it's Renate Richter.

^ former avy of mine as well. Attractive, yes?


----------



## RobynC (Jun 10, 2011)

@Epherion



> No, it is not; Google is chaotic. Secondly, I should not have to do your debating for you.


I simply felt it would be easier for either one of us to use the search engines



> Also, Google is a rudimentary AI, btw.


I'm aware of that, though I should note that I'm not technically opposed to all A.I. -- I'm opposed to A.I. that can near or exceed human intelligence.



> And yes, its Renate Richter.
> View attachment 34666


And you don't understand how this could be in bad taste?



> Attractive, yes?


I'm not attracted to Nazis. I'm Jewish, you know...


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> I simply felt it would be easier for either one of us to use the search engines


If I'm going to take you seriously, you should do your own legwork.



> And you don't understand how this could be in bad taste?


Bad taste how? It's from the movie Iron Sky; the character is not even based on a real person. The actress playing her is German, though.
Iron Sky: Watch the Official Theatrical Trailer!
Secondly, Mel Brooks, The Producers, his work -- that was hysterical.



> I'm not attracted to Nazis. I'm Jewish, you know...


I would know that how? You are also bi; while I don't know your phenotype preferences, I assumed you might like her. Even if she is German. I do.


----------



## RobynC (Jun 10, 2011)

@Epherion



> Bad taste how?


Uh, she's a Nazi. Sure, she's fictional, but she is a fictional SS character (and evidently high-ranking).



> I would know that how?


Uh, posting pictures of Nazis, real or fictional, is generally bad form -- I'm not saying you should be barred from doing so, but it's just bad taste.


R.C.
_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> Uh, she's a Nazi. Sure, she's fictional, but she is a fictional SS character (and evidently high-ranking).
> Uh, posting pictures of Nazis, real or fictional, is generally bad form -- I'm not saying you should be barred from doing so, but it's just bad taste.


I try to strike an objective path in life; my avy is a representation of a trait of mine. Alliances, allegiances, political parties, war criminals and all that don't really factor in, just the person. I chose the avy because she was attractive, commanding, and well dressed. The blond hair suits her well, although I prefer black hair. How about these three:

Also, not that high-ranking. She has a degree in earthology, but is a schoolteacher.


----------



## RobynC (Jun 10, 2011)

@Epherion



> war criminals and all that dont really factor in just the person.


They should -- the Nazis were repulsive.



> Also, not that high ranking.


I read a lot about the Nazis; if I recall right, that's a Standartenführer, which is a Colonel -- pretty high up.


_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> @Epherion
> They should -- the Nazis were repulsive.



As were Milosevic, Pol Pot, Stalin, Chairman Mao, etc... The Nazis' genocide was not that impressive. I believe Stalin and Mao had higher scores. And yet, we don't talk much about them and their crimes.



> I read a lot about the Nazis; if I recall right, that's a Standartenführer, which is a Colonel -- pretty high up.


I'll take your word for it; I don't know much about their rankings.


----------



## RobynC (Jun 10, 2011)

@Epherion



> As was Milosevic, Pol Pot, Stalin, Chairman Mao


Agreed



> The Nazis' genocide was not that impressive.


It still was an attempted genocide



> I believe Stalin and Mao had higher scores.


Scores? This isn't a computer game -- these were people's lives that were brutally extinguished.



> And yet, we dont talk much about them and their crimes.


I'm well aware of Stalin and Mao's atrocities...


_Remember to seriously read my signature down below and be sure you understand what I mean by it..._


----------



## Epherion (Aug 23, 2011)

RobynC said:


> Scores? This isn't a computer game -- these were people's lives that were brutally extinguished.


Well, that depends. Stephen Wolfram believes the world is a computer, and many CS/SE transhumanists also believe so. So one could argue this is all a simulation. Welcome to the matrix.





> I'm well aware of Stalin and Mao's atrocities...


Most people are not.


----------



## RobynC (Jun 10, 2011)

Epherion



> Well, that depends. Stephen Wolfram believes the world is a computer


As I understand it he believes the laws of physics result in the universe being analogous to principles of holography, not actually a holodeck simulation.



> and many CS/SE transhumanists also believe so.


That is a controversial belief, though it doesn't make it impossible.



> So one could argue this is all a simulation. Welcome to the matrix.


Maybe it is, but at this point it's mental masturbation. 



> Most people are not.


It's sad that most people lack a good knowledge of history. I actually find history interesting and I read a lot. 




----------



## Tristan427 (Dec 9, 2011)

RobynC said:


> So basically this computer is based on a duplication of the human brain? Sure logically proceeding along this train of thought you could theoretically improve upon this and produce a brain better than any human being.
> 
> The issue of course is should we? Looking at the described abilities, it would clearly be a very useful tool for some remarkably capable automated weapons and surveillance systems.
> 
> ...


Short sighted? How can the short sighted themselves decide who is short sighted? And now you compare scientists to extremists who want to fast track the end times? My Lord...

On topic: About time we started getting closer to making AI. I would love to have an AI as a friend. Eventually we might be able to make them portable, and that would be awesome. Perhaps download a Cortana skin? Or EDI?


----------



## RobynC (Jun 10, 2011)

@Tristan427



> Short sighted? How can the short sighted themselves decide who is short sighted?


I'm not as short sighted as you'd think mate...



> And now you compare scientists to extremists who want to fast track the end times?


Well, I'm not talking about all scientists; I'm talking about scientists who work in robotics. They do treat science as if it were a religion, they are at the very least indifferent to the extinction of the human race, and they feel that, despite the danger, they should accelerate the development of strong A.I., and that we should welcome our successors the way Homo erectus welcomed us.



> About time we started getting closer to making AI.


Why does it mean so much to create a strong AI?



> I would love to have an AI as a friend.


Isn't that its choice to make, not yours?




----------



## Tristan427 (Dec 9, 2011)

RobynC said:


> @Tristan427
> 
> 
> 
> ...


Pshh, they do it out of curiosity and wanting to advance technology. They treat science with respect, not reverence. And AIs won't have access to everything. It's not like they'll get our nukes through the internet. Nuclear bomb sites are on their own networks. No hacking.

They don't research AIs and think "This might destroy humanity... meh." No. They think "What fascinating research this is. This could have so many applications."

Because AIs can have so many uses, and we'll have another race of beings to communicate with. We are all machines. We are organic machines; they will be synthetic.

Actually no, a friendship takes two people.


----------



## RobynC (Jun 10, 2011)

@Tristan427



> they do it out of curiosity


Does every desire have to be entertained?



> wanting to advance technology


Do you think it's good to advance technology simply for the sake of it, or for a good purpose? Personally, I prefer there being some purpose behind it.



> They don't research AIs and think "This might destroy humanity... meh." No. They think "What fascinating research this is. This could have so many applications."


They understand the risk is there, and they either ignore it out of arrogance or out of their overriding interest. This would be fine except they're playing Russian roulette with 7 billion people.

And since you consider 7 billion deaths insignificant -- what number do you consider significant?




----------



## Epherion (Aug 23, 2011)

RobynC said:


> And since you consider 7 billion deaths insignificant -- what number do you consider significant?


Significance is subjective. 7 billion is a large number, and makes for wonderful statistical modeling. But in answer to your question, half the galaxy would impress me.


----------



## RobynC (Jun 10, 2011)

@Epherion

I guess you've simply become immune to the effects of death and suffering and can simply use large numbers to diminish the significance. You're truly a human who has no humanity, no regard for human suffering. You even admitted you have anti-human attitudes.




----------



## Epherion (Aug 23, 2011)

RobynC said:


> I guess you've simply become immune to the effects of death and suffering and can simply use large numbers to diminish the significance.* You're truly a human who has no humanity, no regard for human suffering.* *You even admitted you have anti-human attitudes.*


No, madam, I'm just an INTJ. And to an extent, yes: a turbulent childhood. It's gotta be well up there for me to care.

Only to an extent. This guy, however:
Pentti Linkola - Wikipedia, the free encyclopedia


----------



## RobynC (Jun 10, 2011)

@Epherion



> No, madam, I'm just an INTJ


That's a very convenient excuse



> And to an extent yes, a turbulent childhood.


That makes a lot of sense to me. I had a bad childhood, but I didn't lose all my humanity. I don't have anti-human attitudes, and I do have a regard for human suffering.



> Its gotta be well up there for me to care.


Insensitive basically?



> Only to an extent


You said earlier:



> Transhumanists are the ones you are thinking of. A majority of them hold anti-human stances. Myself included.


Which indicates you do hold anti-human views



> this guy however:
> Pentti Linkola - Wikipedia, the free encyclopedia


His views aren't much different than yours.




----------



## Epherion (Aug 23, 2011)

RobynC said:


> I don't have anti-human attitudes, and I do have a regard for human suffering.


Suffering is part of the Human Condition(tm)



> Insensitive basically?


Pretty much.




> Which indicates you do hold anti-human views


I do. But there is an extent. 



> His views aren't much different than yours.


As much as I like explosions, I'm not in particular favor of nuclear bombardment of a city.
Existential fallacy: you assume all anti-humanists share the same end goal, eradication or population control. I do not.


----------



## RobynC (Jun 10, 2011)

@Epherion



> Suffering is part of the Human Condition(tm)


Yes it is, but it's no excuse not to care



> Pretty much.


That's not a virtue you know...



> I do. But there is an extent.


Well, you said it would be insignificant if 7 billion people vanished, so to what extent?



> As much as I like explosions, I'm not in particular favor of nuclear bombardment of a city.


That guy favors bombing humanity down to size?



> You assume all anti-humanists share the same end goal: eradication or population control. I do not.


Actually that isn't it...




----------



## Epherion (Aug 23, 2011)

RobynC said:


> That's not a virtue you know...


I fail to see what you are getting at here.





> Well you said it would be insignificant if 7 billion people vanished so what extent?


On an individual level, I can care. But a large horde of people loses its humanity and becomes a collective, a drop of rain in the storm, if you will. They are a collective; the individual is gone, and as such their death is the end of the collective.
Do you feel sorrow, sadness, grief, or regret when you destroy an ant colony?





> That guy favors bombing humanity to size?


Mainly bombing cities, releasing nerve agents, and mass sterilization programs.




> Actually that isn't it...


It's what I inferred.


----------



## RobynC (Jun 10, 2011)

> Fail to see what you are getting at here?


That wasn't something hidden in meaning -- insensitivity is not a virtue.



> On an individual level, i can care. But a large horde of people loses its humanity and becomes a collective.


Yeah but in that horde lie individuals...



> Do you feel sorrow, sadness, grief, or regret when you destroy an ant colony?


Do you feel sorrow, sadness, grief, or regret when you kill an ant? 



> Mainly bombing cities, releasing nerve agents, and mass sterilization programs.


I'm opposed to depopulation efforts




----------



## Epherion (Aug 23, 2011)

RobynC said:


> That wasn't something hidden in meaning -- insensitivity is not a virtue.


What makes you think I'm virtuous?





> Yeah but in that horde lie individuals...


Nope, they gave up their individuality for altruism. They are now one. 




> Do you feel sorrow, sadness, grief, or regret when you kill an ant?


No, it's an insect. This same principle can be applied to a human collective.




> I'm opposed to depopulation efforts


As am I. But unless something is done, we will be killing each other for butter. If we cannot limit our population, then we have to leave the planet.


----------



## RobynC (Jun 10, 2011)

@Epherion



> What makes you think I'm virtuous?


Nothing



> Nope, they gave up their individuality for altruism. They are now one.


How come, then, some people act in their own interest even in a group?



> No, its an insect.


Well what if an A.I. existed that was so intelligent that we were insects in comparison to it? Do you think it would feel sorry for killing one of us or a couple of us?



> This same principle can be applied to a human collective.


Perhaps you didn't notice the fine distinction. I did not say ant colony -- I said ant.



> But unless something is done we will be killing each other for butter.


I assume you meant "for the better"... that means you feel it would be beneficial for man to be exterminated...




----------

