# IBM Has Achieved Simulation of Cat Brain



## RobynC

@Arbite

You think there should be no rules governing the conduct of scientific research?


----------



## Arbite

RobynC said:


> @Arbite
> 
> You think there should be no rules governing the conduct of scientific research?


No, I simply believe that when people say science needs to consider ethics in its research, it is asinine. Now, I don't believe that scientists should be amoral people who experiment on children. Morality is different to ethics.


----------



## RobynC

@Arbite



> Now, I don't believe that scientists should be amoral people who experiment on children. Morality is different to ethics.


But ethical guidelines are created precisely to prevent things such as amoral people experimenting on children.


----------



## niss

Science is a field of study and cannot have ethics or morals. The individuals working in that field of study must have morals and be guided by a code of ethics or you will have people exploiting the field and others for their own personal gain.


----------



## Arbite

RobynC said:


> @Arbite
> 
> 
> 
> But ethical guidelines are created to prevent against things such as amoral people from experimenting on children.



*Laws* were created to stop people from doing things like that. Also, child murder isn't ethics, it's morals. Ethical guidelines have held back things such as stem cell research and cloning for years.


----------



## niss

Arbite said:


> *Laws* were created to stop people from doing things like that. Also, child murder isn't ethics, it's morals. Ethical guidelines have held back things such as stem cell research and cloning for years.


You don't have to go so far as murder or children to cross ethical and moral boundaries. See the Guatemala syphilis experiment.


----------



## Arbite

niss said:


> You don't have to go so far as murder or children to cross ethical and moral boundaries. See the Guatemala syphilis experiment.


It was a convenient example. I was simply shocked at OP's original claim that the scientists were operating without regard to ethics because they had set up a brain simulation, when in fact OP had no real knowledge of the subject.


----------



## absentminded

RobynC said:


> Yes, but the simulation would most likely include an adult human brain which most certainly is sentient.


As I have said twice now, right now they are only simulating electrical impulses between neurons. They are not taking neuroglia into account and they are not considering differences in brain chemistry.

Second, because they are only simulating the electrical pulses, they are only storing the information that saturated the brain at the time they were copying it. The computers *can't* develop any kind of consciousness because *they aren't programmed to do anything with the impulses they are sending*.

At the absolute most, they've only succeeded in storing a human mind as raw data. The mind does not feel anything, recognize that it exists, et cetera, because the CPUs are not performing any operations on the signals being sent and received. They are only replicating what was actually in the brain as they copied the neuronal firing patterns.
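The distinction here (inert stored spike data versus a simulation that actually performs operations on impulses) can be sketched with a toy leaky integrate-and-fire neuron. This is a deliberately minimal illustration with arbitrary parameters, far simpler than whatever neuron models the simulator actually used:

```python
# Toy leaky integrate-and-fire neuron: an illustrative sketch only,
# much simpler than real cortical-simulation models. It shows the
# difference between a recorded spike train (inert data) and a
# simulation that actually performs operations on incoming impulses.

def simulate_lif(input_spikes, threshold=1.0, leak=0.9, weight=0.4):
    """Integrate a binary input spike train and emit an output spike
    whenever the membrane potential crosses threshold, then reset.
    All parameter values are arbitrary demonstration choices."""
    v = 0.0                          # membrane potential
    output = []
    for s in input_spikes:
        v = leak * v + weight * s    # leaky integration of each impulse
        if v >= threshold:
            output.append(1)         # the neuron fires
            v = 0.0                  # and resets
        else:
            output.append(0)
    return output

stored = [1, 1, 1, 0, 1, 1, 1]       # a recorded spike train: just data
fired = simulate_lif(stored)         # processing it produces new spikes
```

Without the integrate-and-threshold loop, `stored` is just a list of numbers; it is the operations performed on the impulses that make it a simulation at all, which is the point being made above.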


----------



## RobynC

@Arbite



> It was a convenient example.


It may be a convenient example, but it's a valid one, albeit extreme.


@absentminded



> As I have said twice now, right now they are only simulating electrical impulses between neurons.


Electrical impulses are signals between brain cells -- those signals, the connections between different sections of neurons, and the feedback loops between them are responsible for consciousness. The chemical reactions are what allow these electrical impulses to take place.



> Second, because they are only simulating the electrical pulses, they are only storing the information that saturated the brain at the time they were copying it. The computers *can't* develop any kind of consciousness because *they aren't programmed to do anything with the impulses they are sending*.


Okay, if they were programmed to do something with the impulses being sent, would you then see the moral/ethical issues?


----------



## Aether

RobynC said:


> *IBM Has Achieved Simulation of Cat Brain*
> 
> This is horribly disturbing on so many levels and strikes me as the product of a bunch of intelligent but mad scientists operating without regard for ethics. A sufficiently detailed simulation of a brain is tantamount to a brain, as consciousness is an emergent phenomenon of properly configured neural activity.
> 
> To make it worse, they plan to create a simulation of a human brain in 2018.
> 
> As far as I'm concerned, ethical guidelines are not being kept in pace with scientific advancement. Experimenting on this is tantamount to torture, shutting it down is tantamount to murder.


This argument is horribly flawed on so many levels. I kid; the only flaw is where you equate a simulation with sentience. If what you're saying were true, we'd have self-aware AI by now... and we don't.

Just wait for quantum computation. If you think this is bad, then I doubt you'll be very happy when computers more powerful than the human brain become the norm, because if anything they'll be what powers the first self-aware AI. Personally, I can't wait.

As for ethics and morality, I think what Arbite may be getting at is how a framework of morals put together by a society (that's pretty much ethics, right?) is too easily allowed to block the progress of scientific advancement, when often the field in question gives rise to moral questions neither party knows how to answer. Then, all the while they're bickering about who's right about what, people are losing their lives. Or I may have gotten the wrong end of the stick and be going off on a bit of a tangent here.

Stem cell research is a good example. Think of how far we could have gotten by now if it weren't for the "OH but it's a loss of potential life!!" stuff. It was banned because those who are, it seems, more concerned with potential life than real life got their way. Thank fuck Obama straightened that out. From what I remember, people were even against taking the cells from an already dead fetus. Like what the fuck else are you gonna do with it?


----------



## niss

Aether said:


> Stem cell research is a good example. Think of how far we could have gotten by now if it weren't for the "OH but it's a loss of potential life!!" stuff.


Therein lies the problem. Thankfully, we realize that as a society we aren't one dimensional, desiring progress without consideration of other important items, such as cost, feasibility, morals, ethics, politics, etc.


----------



## Aether

niss said:


> Therein lies the problem. Thankfully, we realize that as a society we aren't one dimensional, desiring progress without consideration of other important items, such as cost, feasibility, morals, ethics, politics, etc.


When did I say these things shouldn't be considered? My point is that the ignorance of the masses often hinders the progress of what benefits the masses. Funny, that.


----------



## niss

Aether said:


> When did I say these things shouldn't be considered? My point is that the ignorance of the masses often hinders the progress of what benefits the masses. Funny, that.


Where did I say that you said those things shouldn't be considered? My point is that considering all of the implications relative to the regulation of progress is a good thing.


----------



## snail

Since there are so many unknowns about consciousness, I agree with the OP that the experiment would be unethical if there were any real possibility that the brain simulation could become capable of consciousness. I also agree that morality/ethics are important regardless of what a person is doing, perhaps more in science than in any other field, because of how much harm can be caused if such concerns are neglected.

Personally, I believe that the mind/soul exists independently from the brain, and merely acts upon it rather than being created by it, but since there is no way to prove it, I wouldn't even consider taking a risk where, if I was wrong, something could suffer. I think the uncertainty is precisely what makes it necessary to exercise caution.


----------



## Aether

niss said:


> Where did I say that you said those things shouldn't be considered? My point is that considering all of the implications relative to the regulation of progress is a good thing.


Sorry, I kind of assumed, since you were quoting me, saying "therein lies the problem" and then going on to say that thankfully this isn't the case. I can see how I misunderstood now, though, so never mind.

I get what you're saying: it's a good thing in how it roots out the bad that can come of science. But if we spend too much time considering these things, as in the case of stem cell research, we're just shooting ourselves in the foot, really.



snail said:


> Since there are so many unknowns about consciousness, I agree with the OP that the experiment would be unethical if there were any real possibility that the brain simulation could become capable of consciousness. I also agree that morality/ethics are important regardless of what a person is doing, perhaps more in science than in any other field, because of how much harm can be caused if such concerns are neglected.
> 
> Personally, I believe that the mind/soul exists independently from the brain, and merely acts upon it rather than being created by it, but since there is no way to prove it, I wouldn't even consider taking a risk where, if I was wrong, something could suffer. I think the uncertainty is precisely what makes it necessary to exercise caution.


Good points, but imagine this. Say at some point we _are_ able to create a computerised consciousness, something that thinks it exists, and a group of maverick scientists go ahead and create it without anyone catching wind of it and with no regard for ethics or anything of the like. They learn to communicate with it. It turns out it wants to exist, and it feels confused, but OK. It _thanks_ the scientists for producing its existence. If those scientists had gone by what other people told them to do, that creating consciousness is unethical and "playing God" and all that other crap, a self-aware AI would never have been created and a _radically_ different future would play out from that point.

Suddenly, people's opinions of right and wrong have had more of an impact on the human race than ever before. Not only would they be in direct conflict with the consciousness' potential wishes, potential wishes they think they're protecting, but they would also have managed to stop a branch of science right in its path. A branch that could result in the actualisation of what are right now simply dreams for our race: immortality, or at least significantly increased lifespans; the ability to meet other intelligent races; the ability to expand and accelerate our brain power to ridiculous levels.


----------



## snail

Aether said:


> Sorry kind of assumed since you were quoting me, saying therein lies the problem then going on to say thankfully this isn't the case. I can see how I've misunderstood now though so nevermind.
> 
> I get what you're saying it's a good thing in how it roots out the bad that can come of science but if we spend too much time considering these things like in the case of stem cell research we're just shooting ourselves in the foot really.
> 
> 
> 
> Good points, but imagine this. Say at some point we _are_ able to create a computerised consciousness, something that thinks it exists, and a group of maverick scientists go ahead and create it without anyone catching wind of it and with no regard for ethics or anything of the like. They learn to communicate with it. It turns out it wants to exist, and it feels confused, but OK. It _thanks_ the scientists for producing its existence. If those scientists had gone by what other people told them to do, that creating consciousness is unethical and "playing God" and all that other crap, a self-aware AI would never have been created and a _radically_ different future would play out from that point.
> 
> Suddenly, people's opinions of right and wrong have had more of an impact on the human race than ever before. Not only would they be in direct conflict with the consciousness' potential wishes, potential wishes they think they're protecting, but they would also have managed to stop a branch of science right in its path. A branch that could result in the actualisation of what are right now simply dreams for our race: immortality, or at least significantly increased lifespans; the ability to meet other intelligent races; the ability to expand and accelerate our brain power to ridiculous levels.


In our current state, where our technology surpasses our spirituality, I consider a best-case scenario highly unlikely. If scientists created something highly advanced that had a will, and which did not feel harmed or violated by existing, there is a much higher probability that we would find a way to use it for military purposes rather than for enlightenment. Even if it were used for the purposes you mention, some are controversial and would require another thread to discuss in depth.


----------



## Aether

snail said:


> In our current state, where our technology surpasses our spirituality, I consider a best-case scenario highly unlikely. If scientists created something highly advanced that had a will, and which did not feel harmed or violated by existing, there is a much higher probability that we would find a way to use it for military purposes rather than for enlightenment. Even if it were used for the purposes you mention, some are controversial and would require another thread to discuss in depth.


Well then, let's hope that when this technology is mastered we're all well in touch with our spirits and no one's looking out for just number one anymore.

But yeah, you're probably right: if it landed in our hands in our current state, things might not go so well, and that would be an interesting discussion to have. But I think you may be missing the point. The result of this technology isn't what I intended to discuss; it's the funny way in which something can be morally good one minute and wrong the next, and the advancement of our species' goals is what pays the price. You (and I imagine many others) say the experiment would be unethical, but it is reasonable to imagine that the result of the experiment disagrees, which is what ethics are made to protect in this context in the first place. It's a bit of a paradox, no?


----------



## RobynC

@niss



> The individuals working in that field of study must have morals and be guided by a code of ethics or you will have people exploiting the field and others for their own personal gain.


Correct


@Arbite



> Laws were created to stop people from doing things like that. Also child murder isn't ethics, it's morals.


We can spend all day going around in circles on semantics if you want, but you can't tell me it's wrong to have rules governing the conduct of scientific research and experimentation.


@niss



> You don't have to go so far as murder or children to cross ethical and moral boundaries. See the Guatemala syphilis experiment.


This is what I mean with the need for ethical and moral boundaries to exist in scientific research.


@Aether



> Stem cell research is a good example. Think of how far we could have gotten by now if it weren't for the "OH but it's a loss of potential life!!" stuff.


I never said I opposed stem cell research. In fact I'm in favor of such research. 

And before we go in circles about whether this is a logical conflict -- a stem cell is not sentient; these cells have not differentiated into the various types seen throughout the human body. Without a nervous system you cannot have sentience.



> From what I remember people were even against taking the cells from an already dead fetus. Like what the fuck else are you gonna do with it?


Well, the worry is that it would somehow encourage women to have abortions, but it's a ridiculous argument, as I don't know any woman who would have an abortion for that purpose.


@niss 



> Thankfully, we realize that as a society we aren't one dimensional, desiring progress without consideration of other important items, such as cost, feasibility, morals, ethics, politics, etc.


Yeah. Progress is only good when the goal is good -- many people forget this and think progress is always a good thing -- it isn't. Progress is defined as "to advance toward a goal"; it does not specify what the goal is. One has to keep that in mind, as well as the potential results of things failing to work as planned.



> My point is that considering all of the implications relative to the regulation of progress is a good thing.


Correct, and with science advancing at such an unprecedented pace, it is very important to be mindful of the need to seriously consider where our actions are taking us -- short sighted thinking could have very serious consequences.


@snail



> Since there are so many unknowns about consciousness, I agree with the OP that the experiment would be unethical if there were any real possibility that the brain simulation could become capable of consciousness.


Exactly, I would prefer to err on the side of caution in this particular case.



> I also agree that morality/ethics are important regardless of what a person is doing, perhaps more in science than in any other field, because of how much harm can be caused if such concerns are neglected.


Correct, especially depending on the particulars of the experiment performed.



> I think the uncertainty is precisely what makes it necessary to exercise caution.


Correct.


@Aether



> Good points, but imagine this. Say at some point we _are_ able to create a computerised consciousness, something that thinks it exists, and a group of maverick scientists go ahead and create it without anyone catching wind of it and with no regard for ethics or anything of the like. They learn to communicate with it. It turns out it wants to exist, and it feels confused, but OK. It _thanks_ the scientists for producing its existence.


And what if it wants to exist and wishes to choose its own destiny and live out its life as it wishes -- except that can't be done? Isn't that profoundly tragic?


@snail



> I consider a best-case scenario highly unlikely.


Agreed


----------



## Aether

RobynC said:


> I never said I opposed stem cell research. In fact I'm in favor of such research.
> 
> And before we go in circles about whether this is a logical conflict -- A stem-cell is not sentient, these cells have not differentiated into the various types seen throughout the human body. Without a nervous system you cannot have sentience.


Not really what I was getting at. I was just pointing out how ethical concerns got in the way of years of research that could possibly be saving lives right now.



RobynC said:


> And what if it wants to exist and wishes to choose its own destiny and live out its life as it wishes -- except that can't be done? Isn't that profoundly tragic?


Download the consciousness into a body identical to that of a human. There you go, no limitations whatsoever - although if I were the AI, I'd want to be uploaded onto some kind of network where I could access all the information ever stored by the human race, be able to produce any environment I wanted, etc. Would I be complaining? Would I think that was tragic? Fuck no. Obviously whether we'll have the technology to do all that, or any of this, is debatable - but then, this is all speculation.

Remember, all this speculation is just to prove my main point: that a lot of the time we just don't know what we're talking about when it comes to right and wrong, and when that confusion and ignorance gets in the way of science it pisses me off, lol, to put it bluntly. Of course we should walk before we run, but when human stupidity forces us to walk... it's annoying.


----------



## Arbite

RobynC said:


> @Arbite
> 
> 
> We can spend all day going around in circles on semantics if you want but you can't tell me it's wrong to have rules governing the conduct of scientific research and experimentation.


I can say it's wrong, but it isn't.

I'm saying that ethical policies have restrained scientific progress when in reality they were not needed. A graduate physics student friend of mine had to go through an ethics committee to get an experiment of his approved, and he was designing an experiment on electromagnetism, with no testing on anything live. The process delayed his experiment by four months. Why the hell is an ethics committee saying what inert materials he can put in his magnet?

To make it clear: I don't think scientists should be completely unrestricted, but I think the current rules are simply ridiculous.


----------



## wuliheron

RobynC said:


> @wuliheron
> 
> As I understand it, a memristor is a type of resistor that can vary its own resistance and remember the resistance it last had.


Yes, resistance goes up and down according to the direction of the current, and it retains whatever resistance it had last after the current stops. This can be a nonlinear process, and memristors intersecting each other in a "crossbar latch" configuration can then be programmed to form a molecular-scale transistor.
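The retention behaviour described above can be sketched in a toy model. This is an illustrative simplification with invented constants (the resistance bounds, the drift rate, and the linear drift rule are all arbitrary demonstration choices), not the actual device physics of any real memristor:

```python
# Toy model of a memristor, for illustration only. Resistance drifts
# between a low bound R_ON and a high bound R_OFF depending on the
# direction of the current, and it holds its last value when the
# current stops -- that retention is the "memory" in memristor.

R_ON, R_OFF = 100.0, 16000.0    # hypothetical resistance bounds, in ohms

class ToyMemristor:
    def __init__(self):
        self.r = R_OFF          # start fully "off"; self.r is the memory

    def apply_current(self, i, dt=1.0, k=500.0):
        """Drift the resistance: positive current lowers it, negative
        current raises it. k is an arbitrary drift rate; real devices
        behave nonlinearly."""
        self.r -= k * i * dt
        self.r = max(R_ON, min(R_OFF, self.r))   # clamp to physical bounds
        return self.r

m = ToyMemristor()
m.apply_current(+1.0)    # forward current: resistance drops
held = m.r
m.apply_current(0.0)     # current stops: the last resistance is retained
assert m.r == held
m.apply_current(-1.0)    # reverse current: resistance rises again
assert m.r > held
```

In a crossbar latch, many such elements would sit at the intersections of perpendicular wires; the sketch only shows the single-element memory effect that makes those configurations programmable.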


----------



## Aether

RobynC said:


> Aether said:
> 
> > Download the consciousness into a body identical to that of a human.
> 
> 
> How do you know that's even possible? Depending on the particulars of the simulation, that may not be possible.

Yeah, because being able to simulate a real consciousness makes all the sense in the world? _If_ we can create a digital consciousness, it's likely we'd eventually be able to transfer it like you would any data.



> > Would I be complaining? Would I think that was tragic? Fuck no.
> 
> 
> That's your opinion. You have to keep in mind not everybody is you.

No shit. Tragedy is based on perception and is thus a matter of opinion to begin with. Something I think more than a few ethically concerned people don't seem to understand.



> > Of course we should walk before we run, but when human stupidity forces us to walk...It's annoying.
> 
> 
> But you have to keep in mind I'm not talking about stupidity, I'm talking about ethical concerns here. Being ethical isn't necessarily stupid -- in fact if we had a whole world that operated without ethics we would have destroyed ourselves a long time ago.

I don't think science should operate without ethics. I just think that certain concerns should be set aside, because often they're based on intangible philosophical matters that can't be resolved in an objective manner.


----------



## RobynC

@Arbite



> Legislated ethics are either a joke, or a detriment. Things such as anti abortion laws or the prevention of using embryonic stem cells.


Well, I believe that anti-abortion laws and the prevention of using embryonic stem cells are stupid; I certainly do not believe all legislated ethics are bad. Certainly, making murder, rape, and slavery crimes is completely reasonable.



> Even then, morality is only really useful from an evolutionary standpoint.


I fail to understand what's wrong with that. There's nothing wrong with ensuring our survival within reason.


@wuliheron



> Yes, resistance goes up and down according to the direction of the current and it retains whatever resistance it had last after the current stops.


So the direction of the current is the sole factor which varies the resistance?



> This can be a nonlinear process and memristors intersecting each other in a "crossbar latch" configuration can then be programmed to form a molecular scale transistor.


So in this case they're useful for their extremely small size.


@Aether



> _If_ we can create a digital consciousness it's likely we'd eventually be able to transfer it like you would do with any data.


Well, if it's a sim character, sure -- but if hardware actually performs the function of a given group of neurons, that becomes a serious technical issue.



> No shit. Tragedy is based on perception and is thus a matter of opinion to begin with. Something I think more than a few ethically concerned people don't seem to understand.


That was kind of the point _I_ was making. What you might think was fine _could_ be considered tragic to somebody else.



> I don't think science should operate without ethics.


I agree with you there



> I just think that certain concerns should be set aside, because often they're based on intangible philosophical matters that can't be resolved in an objective manner.


Well, that's not exactly true. We're learning a lot about neuroscience currently, and the more we know, the sooner we will know what produces consciousness. I think it's kind of dangerous to start creating simulations like this if we aren't reasonably sure that we're not creating a sentient being for the purpose of a science experiment.


----------



## wuliheron

RobynC said:


> So the direction of the current is the sole factor which varies the resistance?


The technology is new, and there is more than one type of memristor, just as there is more than one type of transistor. The ones created thus far can also be influenced by the voltage or even the spin of electrons (spintronic memristors). I'm not familiar with these particular memristors, but spintronics are still very experimental, so I would guess these work on current and voltage.


----------



## MiriMiriAru

RobynC said:


> I'm not anti-science first of all, I'm not a fear-monger, and I'm not opposed to creating life; I'm opposed to creating sentient life solely for the purpose of experimentation.


I wasn't trying to imply _you_ are an anti-science fear-monger.

What they created was in no way sentient.


----------



## Arbite

RobynC said:


> Well, I believe that anti-abortion laws and prevention of using embryonic stem cells are stupid; I certainly do not believe all legislated ethics are bad. Certainly making murder, rape, and slavery crimes are completely reasonable.


Murder, rape and slavery: morals. It's like banging your head against a brick wall.

MORALITY IS NOT THE SAME AS ETHICALITY





RobynC said:


> I fail to understand what's wrong with that. There's nothing wrong with ensuring our survival within reason.


Within reason? So there's a point at which we shouldn't ensure our own survival?


----------



## RobynC

@wuliheron



> The technology is new and there is more then one type of memristor just as there is more then one type of transistor. The ones created thus far can also be influenced by the voltage or even the spin of electrons (spintronic memristors). I'm not familiar with these particular memristors, but spintronics are still very experimental and these I would guess work on current and voltage.


Understood


@Arbite



> It's like banging your head against a brick wall.
> 
> MORALITY IS NOT THE SAME AS ETHICALITY


Look, there are overlaps between ethics and morals. For example, one definition of ethics is: a theory or a system of _moral_ values. If you want to call it morals, I'll say morals to humor you so we don't go around in endless circles.


----------



## sprinkles

Arbite said:


> Within reason? So there's a point at which we shouldn't ensure our own survival?


Within reason, no, since all points could be reasoned. 

Without reason, also no, since without reason, there _are no_ points.


----------



## absentminded

sprinkles said:


> Within reason, no, since all points could be reasoned.
> 
> Without reason, also no, since without reason, there _are no_ points.


This.

/thread


----------



## RobynC

@Arbite



> Within reason? So there's a point at which we shouldn't ensure our own survival?


I've just never been all that fond of saying "By any means necessary".


----------



## Valdyr

I'll just post some quick thoughts.

1. This IS a huge step towards simulated minds IF the latter is possible. The big question is if such things are possible, or at all probable.

2. This itself is probably not a simulated mind even if such things are possible. It is not a detailed enough model, huge though the advance may be.

3. The question of whether such things are possible is not one that can be simply answered by physicalism or non-physicalism concerning the mind (true metaphysical dualism regarding the mind is next to dead in academic philosophy, with the notable exception of David Chalmers, and with good reason). There is the question of reductive versus non-reductive physicalism. Perhaps mind is, as some have said, an emergent phenomenon and only supervenes on physical brain states but isn't reducible to them. Maybe it is reducible, but is substrate-dependent. Regardless, this is a philosophical minefield, but one that should be examined very closely considering the consequences if simulated minds ARE possible/at all likely (which would involve substrate-independent functionalism).

4. If simulated minds are possible, we have two concerns we must take EXTREMELY seriously. First are the ethical/moral (I am using the terms interchangeably, as do most philosophers who aren't Bernard Williams or virtue theorists) issues. In my view, a simulated mind would still be a moral patient, and we would still have a duty to treat it morally. And if the minds were complex enough, a system of political/legal rights and the like would have to be devised to avoid their oppression. Second, there is a pragmatic question. A simulated mind is one approach to strong AI. But if strong AI is possible, then we have to watch out for the possibility of _unfriendly_ strong AI, these being simulated minds which are not sympathetic to human concerns/concerned with good treatment of moral patients in general. Being actively malicious would be unnecessary - a given function or process could simply go off the rails and cause the AI to want to accomplish a goal at all costs. Even worse would be unfriendly seed AI, which would be simulated minds capable of recursive self-improvement. An unfriendly seed AI would be catastrophic. 

To summarize, even if simulated minds are ultimately impossible, given the potential ramifications involved, I think it would be irresponsible to simply dismiss the issue as science fiction/totally wild and not pursue further research into, for example, how to keep simulated minds friendly, how to stop a seed AI in theory, what a legal framework for simulated minds might look like, etc.


----------



## wuliheron

1) We already simulate minds. The Turing test has been used to fool even experts for years.

2) This is merely one of the first baby steps towards AI that approaches that of humans.

3) You don't need metaphysics to make evaluations of a system. In fact, the whole idea that metaphysics of some sort must apply to anything is debatable. 

4) We can speculate about the moral implications of unicorns, fairies, and UFOs but I don't see it as being particularly productive.


----------



## Valdyr

wuliheron said:


> 1) We already simulate minds. The Turing test has been used to fool even experts for years.


I disagree. The Turing test may test for "intelligence" in a loose sense, but not "minds" in the broad sense.



> 2) This is merely one of the first baby steps towards AI that approaches that of humans.


Agreed, but it's an important step nonetheless.



> 3) You don't need metaphysics to make evaluations of a system. In fact, the whole idea that metaphysics of some sort must apply to anything is debatable.


Why? We're asking what exactly a "mind" is and what its properties are - a theory of mind. I'm aware of 20th-century skepticism of metaphysics, but I'm not sure what else you'd call it when you debate possible worlds semantics/necessity and contingency, causality, etc. Logical positivism was important, but I think philosophers like Kripke have shown that some sort of metaphysics is necessary. I won't debate that post-Wittgenstein metaphysics look little like those of Aristotle, because I don't disagree.



> 4) We can speculate about the moral implications of unicorns, fairies, and UFOs but I don't see it as being particularly productive.


Those aren't productive because I have little reason to believe they're likely. I think the possibility of strong AI is enough to warrant serious consideration of the impacts, even if the chances still favor it _not_ being possible.


----------



## wuliheron

Valdyr said:


> Why? We're asking what exactly a "mind" is and what its properties are - a theory of mind. I'm aware of 20th-century skepticism of metaphysics, but I'm not sure what else you'd call it when you debate possible worlds semantics/necessity and contingency, causality, etc. Logical positivism was important, but I think philosophers like Kripke have shown that some sort of metaphysics is necessary. I won't debate that post-Wittgenstein metaphysics look little like those of Aristotle, because I don't disagree.


Wittgenstein didn't have any metaphysics, nor do modern approaches like Relational Frame Theory, which is the only one thus far to successfully bridge the cognitive and behavioral sciences. Instead of randomly choosing a metaphysics, creating a model, and then comparing it with empirical data, we can extrapolate straight from the empirical data. Instead of demanding that a single definition of "mind" fit all situations, we can demonstrate how the definition changes according to the context. In general, the more complex the system, the more advantageous it is to use systems science instead.



Valdyr said:


> Those aren't productive because I have little reason to believe they're likely. I think the possibility of strong AI is enough to warrant serious consideration of the impacts, even if the chances still favor it _not_ being possible.


And I'm saying the same thing. We have little reason to believe any one-size-fits-all definition of "mind" is likely and, therefore, even less reason to speculate wildly about the ethics of such a thing.


----------



## Valdyr

wuliheron said:


> Wittgenstein didn't have any metaphysics, nor do modern approaches like Relational Frame Theory, which is the only one thus far to successfully bridge the cognitive and behavioral sciences.


I know Wittgenstein didn't have a metaphysics. That _was_ his influence on metaphysics.



> Instead of randomly choosing a metaphysics, creating a model, and then comparing it with empirical data, we can extrapolate straight from the empirical data. Instead of demanding that a single definition of "mind" fit all situations, we can demonstrate how the definition changes according to the context. In general, the more complex the system, the more advantageous it is to use systems science instead.


I'll try to avoid turning this into a metaphysics thread, but I think that constitutes a substantive metaphysical theory. To show what I mean, even if there isn't one sort of structure that is a "mind," the theory that there are no such real patterns that can be generalized is itself a substantive metaphysical theory, because it contains implicit notions about what sorts of things can be real (making it an ontological theory at that). We can't escape metaphysics.



> And I'm saying the same thing. We have little reason to believe any one-size-fits-all definition of "mind" is likely and, therefore, even less reason to speculate wildly about the ethics of such a thing.


Not having a one-size-fits-all definition is not the same as having absolutely no idea what something is or how it works. The point is that if any of these machines could, say, have subjectivity, they would qualify as persons and therefore moral patients. How essentialist we are about the "mind" is completely irrelevant to whether we could have machines deserving of moral status.


----------



## wuliheron

Valdyr said:


> I'll try to avoid turning this into a metaphysics thread, but I think that constitutes a substantive metaphysical theory. To show what I mean, even if there isn't one sort of structure that is a "mind," the theory that there are no such real patterns that can be generalized is itself a substantive metaphysical theory, because it contains implicit notions about what sorts of things can be real (making it an ontological theory at that). We can't escape metaphysics.


You've got the wrong idea. Contextualism and systems theories have nothing whatsoever to say about metaphysics, either positive or negative. They don't claim that there isn't some "ultimate" definition of mind (whatever that might mean!), merely that they can demonstrate the meaning of the term changes with the context. It's the pragmatic approach of focusing on what is demonstrable and ignoring a lot of speculation. As far as people like Wittgenstein are concerned, such speculation is mysticism and might be perfectly useful for personal growth or spirituality, but of limited use in academic philosophy and the sciences.



Valdyr said:


> Not having a one-size-fits-all definition is not the same as having absolutely no idea what something is or how it works. The point is that if any of these machines could, say, have subjectivity, they would qualify as persons and therefore moral patients. How essentialist we are about the "mind" is completely irrelevant to whether we could have machines deserving of moral status.


That's either a mystical assertion or an expression of support for meta-ethics. Again, Contextualism and systems theory have nothing to say about mysticism either pro or con. 

As for meta-ethics, for all we know rocks have minds and we could speculate endlessly about the morality of how we treat rocks with minds and what kind of minds various types of rocks have. It might sound silly to some, but many cultures believe everything is alive and conscious and we could speculate endlessly. For a Pragmatic Contextualist, the only thing that determines how valid such an assumption might be is how useful it is in any given context. Thus the emphasis is on meta-ethics and what is demonstrable rather than ethical speculation.

It's along the lines of: give a man a fish and you feed him for a meal; teach him how to fish and you feed him for a lifetime. The reasoning is that if you learn how to make ethical decisions, then you'll be able to do so as you become aware of new ethical dilemmas.


----------






## absentminded

*Sees that @Valdyr commented*


----------



## Valdyr

wuliheron said:


> You've got the wrong idea. Contextualism and systems theories have nothing whatsoever to say about metaphysics either positive or negative.


What do we disagree about then? I'll do the metaphysics, you do the systems theory. 

> They don't claim that there isn't some "ultimate" definition of mind (whatever that might mean!),

But that seems to be what _you_ have a problem with. Correct me if I'm wrong on this. 



> merely that they can demonstrate the meaning of the term changes with the context. It's the pragmatic approach of focusing on what is demonstrable and ignoring a lot of speculation. As far as people like Wittgenstein are concerned, such speculation is mysticism and might be perfectly useful for personal growth or spirituality, but of limited use in academic philosophy and the sciences.


I don't agree with Wittgenstein on this matter, but I don't want to turn the purpose of the thread into debating whether metaphysics is possible.



> That's either a mystical assertion or an expression of support for meta-ethics. Again, Contextualism and systems theory have nothing to say about mysticism either pro or con.


How is it "mysticism?" Mysticism is the attempt to commune with a divine reality, some spiritual truth, God etc. through the use of "alternative" epistemic routes like "intuition."



> As for meta-ethics, for all we know rocks have minds and we could speculate endlessly about the morality of how we treat rocks with minds and what kind of minds various types of rocks have.


Metaethics is concerned with the foundations of morality - whether moral sentences can be true or false, whether the source of value (if it exists) is internal or external, etc. This is an instance of applied ethics.



> It might sound silly to some, but many cultures believe everything is alive and conscious and we could speculate endlessly.


Sure we could, but we don't have very good reasons to believe those theories from the standpoints of inference to the best explanation, explanatory power, etc. This reeks of logical positivism - that the only things which can be true are either immediately empirically verifiable or are analytically true. 

Just because both theories are speculative in nature doesn't mean they are _equal_. We're fairly certain, from our science and our philosophical arguments, that rocks aren't the sorts of things with mental attributes, no matter how many cultures believe so. No good explanation has been proposed as to how a rock could have a mind, or why it is that from the reality we experience, rocks having minds is a good explanation for their behavior. The debate, for example, between reductive and non-reductive physicalists about the mind is not in the same class as essentially random assertions. Can all facts about mental states be reducible to purely physical facts? This is a question worth asking, because they are competing explanations for the reality we experience (mental lives). They are not the only explanations, nor the whole explanation.



> For a Pragmatic Contextualist, the only thing that determines how valid such an assumption might be is how useful it is in any given context. Thus the emphasis is on meta-ethics and what is demonstrable rather than ethical speculation.


The whole point of "ethical speculation" is so that we aren't caught off guard if the (very possible) event occurs. I don't care what pragmatic contextualism thinks. The idea is "if minds, whatever they are, can be conscious, then they demand moral treatment, and we should think about it now so as not to trip over ourselves." 

These are questions of ethics and philosophy of mind, but I don't agree with pragmatic contextualism as a philosophy of science.



> It's along the lines of: give a man a fish and you feed him for a meal; teach him how to fish and you feed him for a lifetime. The reasoning is that if you learn how to make ethical decisions, then you'll be able to do so as you become aware of new ethical dilemmas.


This sounds like virtue ethics to me. All I'm saying is that it's safer to ask the question sooner than later.


----------



## RobynC

@Valdyr



> 1. This IS a huge step towards simulated minds IF the latter is possible. The big question is if such things are possible, or at all probable.


I never said it wasn't impressive. I simply said that I thought that it was unethical to create one.



> 2. This itself is probably not a simulated mind even if such things are possible. It is not a detailed enough model, huge an advance though it may be.


I don't really know how detailed it has to be for consciousness to actually emerge. If consciousness emerges, then it is detailed enough. Regardless, I hope you're right that this simulation isn't accurate enough.



> 3. The question of whether such things are possible is not one that can be simply answered by physicalism or non-physicalism concerning the mind (true metaphysical dualism regarding the mind is next to dead in academic philosophy, with the notable exception of David Chalmers, and with good reason). There is the question of reductive versus non-reductive physicalism. Perhaps mind is, as some have said, an emergent phenomenon and only supervenes on physical brain states but isn't reducible to them. Maybe it is reducible, but is substrate-dependent. Regardless, this is a philosophical minefield, but one that should be examined very closely considering the consequences if simulated minds ARE possible/at all likely (which would involve substrate-independent functionalism).


My assumption is humans are sentient due to the configuration and functioning of their brains. I simply assume that if you duplicate it to a sufficient degree, you'll get a similar result.



> 4. If simulated minds are possible, we have two concerns we must take EXTREMELY seriously. First are the ethical/moral (I am using the terms interchangeably, as do most philosophers who aren't Bernard Williams or virtue theorists) issues. In my view, a simulated mind would still be a moral patient, and we would still have a duty to treat it morally.


Correct



> And if the minds were complex enough, a system of political/legal rights and the like would have to be devised to avoid their oppression.


Which is entirely correct. There is also another issue: What differentiates an artificially intelligent and sentient being from a naturally intelligent and sentient slave?

As an interesting note, the word "robot" was introduced by a Czech playwright (Karel Čapek, in his play _R.U.R._), from a Czech word meaning forced labor.



> Second, there is a pragmatic question. A simulated mind is one approach to strong AI. But if strong AI is possible, then we have to watch out for the possibility of _unfriendly_ strong AI, these being simulated minds which are not sympathetic to human concerns/concerned with good treatment of moral patients in general.


Well, with any sentient being comes the right to defend itself. If a person attempts to kill me, I can kill them to protect myself if necessary.



> Being actively malicious would be unnecessary - a given function or process could simply go off the rails and cause the AI to want to accomplish a goal at all costs. Even worse would be unfriendly seed AI, which would be simulated minds capable of recursive self-improvement. An unfriendly seed AI would be catastrophic.


You seem to be describing an artificially intelligent entity without a conscience.


@wuliheron



> 1) We already simulate minds. The Turing test has been used to fool even experts for years.


The question is at what point do these A.I.s actually cross the line into sentience?



> 2) This is merely one of the first baby steps towards AI that approaches that of humans.


I find it baffling that anybody would want to do that. Once A.I. approaches and surpasses that of humans, our goose is cooked, lol



> 4) We can speculate about the moral implications of unicorns, fairies, and UFOs but I don't see it as being particularly productive.


I can agree that speculating about the moral implications of unicorns, fairies, and UFO's would be unproductive. However sentient A.I. will eventually happen. There isn't anything unproductive in discussing the issue before it actually arrives.


----------



## wuliheron

Valdyr said:


> What do we disagree about then? I'll do the metaphysics, you do the systems theory.


Exactly. A pragmatist might think you are wasting your time, but could not fault you for following your muse. You gave your position on the issue, and I offered an alternative.



Valdyr said:


> How is it "mysticism?" Mysticism is the attempt to commune with a divine reality, some spiritual truth, God etc. through the use of "alternative" epistemic routes like "intuition."


Linguistically speaking, statements about life, the universe, and everything are no different from asking "What is the sound of one hand clapping?" While superficially such questions might not seem to bear on issues of ultimate or divine reality or whatever, it is only the context that makes this clear.



Valdyr said:


> Metaethics is concerned with the foundations of morality - whether moral sentences can be true or false, whether the source of value (if it exists) is internal or external, etc. This is an instance of applied ethics.


It's an example of a metaethical argument against normative ethics.



Valdyr said:


> Sure we could, but we don't have very good reasons to believe those theories from the standpoints of inference to the best explanation, explanatory power, etc. This reeks of logical positivism - that the only things which can be true are either immediately empirically verifiable or are analytically true.
> 
> Just because both theories are speculative in nature doesn't mean they are _equal_. We're fairly certain, from our science and our philosophical arguments, that rocks aren't the sorts of things with mental attributes, no matter how many cultures believe so. No good explanation has been proposed as to how a rock could have a mind, or why it is that from the reality we experience, rocks having minds is a good explanation for their behavior. The debate, for example, between reductive and non-reductive physicalists about the mind is not in the same class as essentially random assertions. Can all facts about mental states be reducible to purely physical facts? This is a question worth asking, because they are competing explanations for the reality we experience (mental lives). They are not the only explanations, nor the whole explanation.


From a pragmatic point of view both are equally speculative, because we don't have demonstrable evidence for either rocks or computers having minds, and we don't even have a clear definition of "mind". Even the cargo cults had more to go on, and look how they turned out.



Valdyr said:


> The whole point of "ethical speculation" is so that we aren't caught off guard if the (very possible) event occurs. I don't care what pragmatic contextualism thinks. The idea is "if minds, whatever they are, can be conscious, then they demand moral treatment, and we should think about it now so as not to trip over ourselves."


The road to hell is paved with good intentions.



Valdyr said:


> These are questions of ethics and philosophy of mind, but I don't agree with pragmatic contextualism as a philosophy of science.


It's a science, a method or tool, and not a school of thought. You might as well say you don't agree with using screwdrivers.



Valdyr said:


> This sounds like virtue ethics to me. All I'm saying is that it's safer to ask the question sooner than later.


And all I'm saying is that it's not necessarily safer, and could even be counterproductive by creating preconceptions and expectations that have nothing to do with what we eventually have to deal with.


----------



## wuliheron

RobynC said:


> @wuliheron
> 
> The question is at what point do these A.I.s actually cross the line into sentience?


That's assuming there even is a line. People have been debating whether animals have feelings much less sentience for eons.



RobynC said:


> I can agree that speculating about the moral implications of unicorns, fairies, and UFO's would be unproductive. However sentient A.I. will eventually happen. There isn't anything unproductive in discussing the issue before it actually arrives.


We already have computers that mimic people, the question is will they ever actually have minds and, if so, what are the ethical implications. My only argument is that without a clear definition of "mind" or "sentience" much less a clear example of a strong AI we might as well be debating how many angels can dance on the head of a pin.


----------



## RobynC

@wuliheron



> That's assuming there even is a line. People have been debating whether animals have feelings much less sentience for eons.


Humans do. So eventually that line will logically be crossed.



> My only argument is that without a clear definition of "mind" or "sentience" much less a clear example of a strong AI we might as well be debating how many angels can dance on the head of a pin.


Well, self-awareness would be a start...


----------



## wuliheron

RobynC said:


> @wuliheron
> 
> Humans do. So eventually that line will logically be crossed
> 
> Well, self-awareness would be a start...


I'd say that with the incredible rate of progress in the sciences, especially neurology, it just doesn't make sense to speculate endlessly about things that people have already been speculating about for eons. Sometimes it's just more productive to wait and see what turns up.


----------



## RobynC

@wuliheron

How many animals have passed the mirror test? It's a test whereby an animal is placed in front of a mirror to see if it realizes the reflection is of itself.


----------



## wuliheron

RobynC said:


> @wuliheron
> 
> How many animals have passed the mirror test? It's a test whereby an animal is placed in front of a mirror to see if it realizes the reflection is of itself.


To the best of my knowledge only a handful of species. However, that is science and not strictly speaking philosophy. I don't have to wait for someone to provide scientific evidence or a logical argument that animals have feelings and awareness. Ultimately it always comes back to me and how I relate to myself and the world around me. Science and academia are wonderful tools for exploring such things in detail, but ethically speaking I usually get by just fine without them.


----------



## RobynC

@wuliheron



> To the best of my knowledge only a handful of species.


Evidently more than a handful...

Mammals
Humans
Bonobos
Chimpanzees
Orangutans
Gorillas
Capuchin monkeys
Elephants
Orcas
Bottlenose dolphins

Avians
European Magpies
Pigeons
Seagulls

Arachnids
Portia Labiata



> However, that is science and not strictly speaking philosophy.


Yes, it's science that shows that they possess self-awareness...



> I don't have to wait for someone to provide scientific evidence or a logical argument that animals have feelings and awareness.


Then what's the problem?


----------



## wuliheron

RobynC said:


> @wuliheron
> 
> Then what's the problem?


The question was whether it is worthwhile to debate the ethics of a possible human-like AI. I'd have to say no, it's not worthwhile, and I don't need a scientific study or some abstract metaphysical rationalization for applying ethics to strong AI. Quite the opposite: it's more often the case that people depend on such things to justify unethical behavior, such as classifying blacks as "subhuman".


----------



## RobynC

I think it is definitely worth debating the ethics since it's an issue that exists, like it or not.


----------



## wuliheron

RobynC said:


> I think it is definitely worth debating the ethics since it's an issue that exists, like it or not.


It isn't an issue yet, it is merely speculation at this point, and debate alone is not necessarily productive.


----------



## Aether

RobynC said:


> The question is at what point do these A.I.s actually cross the line into sentience?


When the AI starts inquiring into its own existence I reckon. Whether they'd have the means to communicate this is another question.



wuliheron said:


> That's assuming there even is a line. People have been debating whether animals have feelings much less sentience for eons.


Would you say there's even a remote possibility that protozoa (eg an amoeba) are sentient? No? Then there must be a line.


----------



## Aether

---snip---


----------



## wuliheron

Aether said:


> Would you say there's even a remote possibility that protozoa (eg an amoeba) are sentient? No? Then there must be a line.


If there are lines, it's pretty apparent we draw them. Mostly lines in the sand that we constantly erase and redraw somewhere else.


----------



## Sup3rSloth

wuliheron said:


> If there are lines, it's pretty apparent we draw them. Mostly lines in the sand that we constantly erase and redraw somewhere else.


Yeah, the line will keep moving, until the simulation can do exactly what a human can do.
By that stage, we won't be the ones making the decisions anymore...


----------



## RobynC

@wuliheron



> It isn't an issue yet, it is merely speculation at this point, and debate alone is not necessarily productive.


Actually, it is not speculation. Humans have a brain and are sentient, and since we have used the mirror test as one way to gauge self-awareness _(though it is not necessarily the only way)_, this phenomenon clearly appears surprisingly far down the animal kingdom.


@Aether



> When the AI starts inquiring into its own existence I reckon.


This would be one criterion that would definitely factor into the equation. The presence of various types of feedback loops would also be a valid one.



> Would you say there's even a remote possibility that protozoa (eg an amoeba) are sentient? No? Then there must be a line.


I would say a central nervous system, or a central nervous system analogue (i.e. a computer that works like one), would be a good working start.


@Sup3rSloth



> Yeah, the line will keep moving, until the simulation can do exactly what a human can do.
> By that stage, we won't be the ones making the decisions anymore...


Very good point.


----------



## wuliheron

RobynC said:


> Actually, it is not speculation. Humans have a brain and are sentient, and since we have used the mirror test as one way to gauge self-awareness _(though it is not necessarily the only way)_, this phenomenon clearly appears surprisingly far down the animal kingdom.


So what? Are you suggesting that because a pigeon and a seagull can recognize themselves in a mirror we must debate the ethics of a nonexistent AI?


----------



## RobynC

@wuliheron



> So what? Are you suggesting that because a pigeon and a seagull can recognize themselves in a mirror we must debate the ethics of a nonexistent AI?


If you read what I said: considering that animals lower on the evolutionary ladder than we would have previously suspected seem to possess self-awareness, we will likely create artificial sentience, through a simulation and/or through artificial intelligence, sooner rather than later.


----------



## wuliheron

RobynC said:


> If you read what I said: considering that animals lower on the evolutionary ladder than we would have previously suspected seem to possess self-awareness, we will likely create artificial sentience, through a simulation and/or through artificial intelligence, sooner rather than later.


That's not an explanation for why we must debate the ethics of such things.


----------



## RobynC

The reason we must debate the ethics is that 


 A sentient being should not be created for the purpose of somebody's intellectual curiosity
 A sentient being should have the basic right to not be owned by somebody else
 Experimenting on a sentient being, even if artificial, could be considered morally equivalent to torture
 Shutting down an artificially sentient being could be tantamount to murder.


----------



## wuliheron

RobynC said:


> The reason we must debate the ethics is that
> 
> 
> A sentient being should not be created for the purpose of somebody's intellectual curiosity
> A sentient being should have the basic right to not be owned by somebody else
> Experimenting on a sentient being, even if artificial, could be considered morally equivalent to torture
> Shutting down an artificially sentient being could be tantamount to murder.


Those aren't reasons why it should be debated: it's a list of personal ethics.


----------



## RobynC

@Razare



> Physics isn't understood, though, neither is our brain.


While the whole of physics is not understood, the fact that consciousness is a product of brain activity is pretty much established science. While I won't say that we understand everything about the brain, the basic process of consciousness involves feedback loops between the thalamus _(particularly the centromedian nucleus)_, the cortex, the hippocampus, as well as the claustrum _(which has feedback loops to all of the three previously mentioned structures)_.

To summarize, the process involves feedback loops between sensory processing and memory.
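Purely as an illustrative toy (the "region" names and the update rule below are invented for the sketch; this is not a neuroscience model), the reciprocal feedback-loop idea can be caricatured in a few lines of Python: activity injected at one node circulates among mutually coupled nodes and briefly reverberates even after the input stops.

```python
# Toy caricature of reciprocal feedback loops between "brain regions".
# The region names and the update rule are invented for illustration only.

def step(state, sensory_input, gain=0.6):
    """One update: each node mixes its neighbors' previous activity."""
    thalamus, cortex, memory = state
    new_thalamus = gain * (sensory_input + cortex + memory) / 3
    new_cortex = gain * (thalamus + memory) / 2
    new_memory = gain * (thalamus + cortex) / 2
    return (new_thalamus, new_cortex, new_memory)

state = (0.0, 0.0, 0.0)
for _ in range(5):               # drive the loop with sensory input
    state = step(state, sensory_input=1.0)
driven = state                   # activity while input is present

for _ in range(5):               # input removed: activity decays...
    state = step(state, sensory_input=0.0)
# ...but the loop still reverberates (all activities stay positive for a while)
```

The only point of the toy is that mutually coupled nodes sustain activity for a time after input ceases, which is the flavor of the "feedback loops between sensory processing and memory" described above.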


----------



## Manhattan

What's unethical about creating consciousness? If we create it, it's not a threat, and it wants to live, we should let it live. If it fails any of the above, destroy it. 

In either case, our focus should be on learning, not limiting ourselves.


----------



## RobynC

@ManhattanINTP



> What's unethical about creating consciousness?


The problem isn't so much creating consciousness -- every time a mother has a child she is creating consciousness. The problem I have is creating artificial consciousness for the purpose of some kind of experiment.



> If we create it, it's not a threat, and it wants to live, we should let it live.


The issue is granting it freedom. I wouldn't want to exist inside a box -- I'd want a body like the one I have now, where I can travel and interact with others.



> our focus should be on learning


There were lots of experiments carried out on humans without informed consent throughout history. While some of them were of arguable value, they are reviled. While some people will of course say "informed consent and patients' rights get in the way of good research," I'd rather have a requirement for informed consent and patients' rights, so to speak.


----------



## absentminded

RobynC said:


> The issue is granting it freedom. I wouldn't want to exist inside a box -- I'd want a body like the one I have now, where I can travel and interact with others.


You're projecting what you would feel onto something that could be structured fundamentally differently from you. By your metric, that _could_ be considered immoral/unethical.


----------



## MrShatter

RobynC said:


> @ManhattanINTP
> 
> The problem I have is creating artificial consciousness for the purpose of some kind of experiment.


What is wrong with this?

noun /ˈkänCHəsnəs/ 
consciousnesses, plural

The state of being awake and aware of one's surroundings

I see no problem with creating awake-ness, or with experimenting on it.


----------



## RobynC

@MrShatter

Well, I'm largely talking about sentience, though the definition of "conscious" does include being self-aware. The problem is that you're creating an entity that might not be happy being trapped in a computer and unable to experience freedom. I think that is unethical. Furthermore, experimentation could include things that would make it suffer, which would be tantamount to torture, and turning off such an entity for good would be ethically tantamount to killing it.


----------



## MrShatter

RobynC said:


> @MrShatter
> 
> Well, I'm largely talking about sentience, though the definition of "conscious" does include being self-aware. The problem is that you're creating an entity that might not be happy being trapped in a computer and unable to experience freedom. I think that is unethical. Furthermore, experimentation could include things that would make it suffer, which would be tantamount to torture, and turning off such an entity for good would be ethically tantamount to killing it.


What if sentience is not equated with any sort of desire?


----------



## absentminded

@RobynC

I believe I've mentioned this already, but you're operating on assumptions about consciousness that are unproven, deductively or inductively.

Evolved consciousness has desires because they are necessary for the replication of the computational machinery inside our cells. It is entirely possible to conceive of an artificial consciousness that does not have needs or desires but still acknowledges its existence and can "think" in a sense.


----------



## RobynC

@MrShatter



> What if sentience is not equated with any sort of desire?


It would probably want to be free...


----------



## MrShatter

RobynC said:


> @MrShatter
> 
> 
> 
> It would probably want to be free...


Free from what?


----------



## Aether

RobynC said:


> It would probably want to be free...


Yet it could want nothing if desire isn't a requisite for sentience as the quote basically said.


----------



## RobynC

@MrShatter

Free from being confined to the simulation


----------



## MrShatter

RobynC said:


> @MrShatter
> 
> Free from being confined to the simulation


Humans are confined to situations too, and we're not overly stressed about it. Limits do not intrinsically make one unhappy.


----------



## wuliheron

All the evidence we have thus far suggests that the more intelligent the animal, the wider the range of emotions. Mr. Spock might make for great entertainment, but we've never discovered anyone actually like him, and the speculation that anything remotely like a self-aware AI without emotions could exist is just that: total speculation.


----------



## RobynC

@wuliheron

Agreed


----------



## absentminded

@wuliheron

Yet the definition of consciousness and all theoretical methods of structuring and replicating it do not include, comment on or require the existence of emotions.

It is true that it is speculation, but it isn't necessarily unfounded.


----------



## wuliheron

absentminded said:


> @wuliheron
> 
> Yet the definition of consciousness and all theoretical methods of structuring and replicating it do not include, comment on or require the existence of emotions.
> 
> It is true that it is speculation, but it isn't necessarily unfounded.


Yeah, and before thermodynamics we had phlogiston theory. It was a logical and much more parsimonious account than our current theories, but it just happened to be dead wrong. When it comes to a system as complex as the human brain, which we know so little about, I tend to favor empirical evidence over theories based on ancient cultural biases. Often, like phlogiston, the only real purpose such theories serve is as a place to at least begin exploring the issues.

I suggest reading up on Antonio Damasio, a neurologist who specializes in people who have lost the ability to emote. These are people who can't decide whether to get out of bed or tie their shoes, simply because they have no motivation or emotional context. They behave much more like computers than conscious people, sometimes being extremely open to any suggestion even when it is not in their best interest.


----------



## absentminded

wuliheron said:


> Yeah, and before thermodynamics we had phlogiston theory. It was a logical and much more parsimonious account than our current theories, but it just happened to be dead wrong. When it comes to a system as complex as the human brain, which we know so little about, I tend to favor empirical evidence over theories based on ancient cultural biases. Often, like phlogiston, the only real purpose such theories serve is as a place to at least begin exploring the issues.


True. But because the theories we have are the only tools available to us, we have to use them to make guesses rather than sit twiddling our thumbs until a better theory comes along, because the better theories are the result of guesses not working.



> I suggest reading up on Antonio Damasio, a neurologist who specializes in people who have lost the ability to emote. These are people who can't decide whether to get out of bed or tie their shoes, simply because they have no motivation or emotional context. They behave much more like computers than conscious people, sometimes being extremely open to any suggestion even when it is not in their best interest.


Again, that's a study of natural consciousness. Emotions are the defense mechanisms we acquired before we learned to reason, so our brain is constructed on top of an emotional foundation. :sad:

Our understanding of human consciousness could very well be worthless in the AI arena as I've already said.


----------



## RobynC

@absentminded

What you are failing to comprehend is that artificial intelligence is based around producing computers that can reason like intelligent beings, all of which are natural. The fact that simulations are being created of animal brains indicates that scientists are trying to reverse engineer nature. This logically means that the artificial intelligence created would behave like natural intelligence.


----------



## absentminded

RobynC said:


> @absentminded
> 
> What you are failing to comprehend is that artificial intelligence is based around producing computers that can reason like intelligent beings, all of which are natural.


I haven't missed this in the slightest.



> The fact that simulations are being created of animal brains indicates that scientists are trying to reverse engineer nature.


Again, I understand completely.



> This logically means that the artificial intelligence created would behave like natural intelligence.


This is where your logic breaks down.

We don't know exactly _how_ we will succeed in creating AI and it very well could be as simple as

iamselfaware.exe

Just because we are simulating or emulating something does not mean that the product will have anything in common with the original beyond broad structural elements like a neural network or division of cognitive labor.


----------



## Chinchilla

RobynC said:


> @_Chinchilla_
> 
> 
> 
> So you would be for purposefully designing a sentient machine, a free entity like us that would be mentally ill? Isn't that kind of unethical?


I never said I would be for that. Don't draw assumptions unless they are reasonable.

For argument's sake, yes I would. The reward of such a thing is greater than the risk, and the only problem with it is upsetting certain individuals. I see nothing ethically wrong with it anyways. Remember, morals and ethics are subjective.




> Well, if you're going to argue that they're free entites, they should have the means to be free.


Are humans free outside of their bodies? No. Are humans free outside of Earth? No. I was not talking about physical freedom; I was talking about philosophical freedom, i.e., free will. It might not be possible to transfer such an A.I. to a robot anyways.





> With the right technology it would be possible to monitor all activity in the brain. There of course come ethical issues in this respect in and of its own right -- it would be tantamount to mind-reading.
> 
> That could be done just via our growing knowledge of physics.


Yes it would, that's why laws would have to be put in place not to take away people's rights. The problem is, do we consider an A.I. a person?


----------



## RobynC

@wuliheron



> A) Its a computer, not a robot or Skynet. You can keep your hand on the off switch if it makes you feel better and don't have to give it so much as an internet connection.


If you wanted to create a technological singularity _(which you said you did)_, you would definitely have lots of artificially intelligent beings being created with ever improving methods of communication. It would be inevitable that their intelligence would become such that they would grow outside our means to control them.



> It's an artificial intelligence that can think thousands or millions of times faster than a human being and rapidly becomes far more knowledgeable and intelligent.


I understand that. But if it was sufficiently intelligent, it could mentally outmaneuver all of us. If it decided that it didn't like mankind, our goose would be cooked.


@Chinchilla



> I never said I would be for that. Don't draw assumptions unless they are reasonable.


You did sort of imply it with this statement...
"We could also use it as a way to understand psychological disorders and problems"


> For arguments sake, yes I would. The reward of such a thing is greater than the risk and the only problem with it is upsetting certain individuals.


If you claim that they have the same rights as us, how could you, in good conscience, entertain the possibility of deliberately making an individual mentally ill? If a person were deliberately made mentally ill, it would be considered an atrocity.



> I see nothing ethically wrong with it anyways. Remember, morals and ethics are subjective.


That was 



> Are humans free outside of their bodies? No.


Uh, but our bodies can move around, and interact with the physical environment.



> I was not talking about physical freedom, I was talking about philosophical freedom, i.e., free will. It might not be possible to transfer such an A.I. to a robot anyways.


Well, until that's doable, I think that creating a sentient artificial intellect is unconscionable.



> Yes it would, that's why laws would have to be put in place not to take away people's rights.


There are already laws that prohibit the government from performing unreasonable searches and seizures without probable cause, but they do it anyway. In many cases the surveillance methods are covert -- how do you defend your rights if you aren't even aware they're being violated? We only found out about these spying programs because whistleblowers exposed them _(and our administration is doing everything it can to crack down on such folks)_.



> The problem is, do we consider an A.I. a person?


If an A.I. is sentient, it should be considered as such. Of course, I doubt that would happen, because the whole purpose of creating intelligent machines is to do work that humans would normally be required to do -- effectively, mechanical slaves. If they were considered sentient, there would be ethical issues about forcing them to work for free, potentially in unsafe conditions, and about shutting them down or throwing them out when they outlive their usefulness, so there would be a lot of reasons to classify them as non-sentient even if they were.


----------



## wuliheron

RobynC said:


> @wuliheron
> 
> If you wanted to create a technological singularity _(which you said you did)_, you would definitely have lots of artificially intelligent beings being created with ever improving methods of communication. It would be inevitable that their intelligence would become such that they would grow outside our means to control them.


A) I never said I wanted to create them. All I said was that these were the real advantages.



RobynC said:


> I understand that. But if was sufficiently intelligent, it could mentally outmaneuver all of us. If it decided that it didn't like mankind, our goose would be cooked.


B) So it's going to trick you into destroying the world. Sounds like you should be writing horror novels rather than debating real technology.


----------



## RobynC

@wuliheron



> So its going to trick you into destroying the world.


No, what I said is that they would rapidly grow outside our means to control or stop them.


----------



## wuliheron

RobynC said:


> @wuliheron
> 
> No, what I said is that they would rapidly grow outside our means to control or stop them.


Again, it's a computer, and if you want you can just unplug the thing.


----------



## Chinchilla

RobynC said:


> @_Chinchilla_
> 
> You did sort of imply it with this statement... "We could also use it as a way to understand psychological disorders and problems" If you claim that they have the same rights as us, how could you, in good conscience, entertain the possibility of deliberately making an individual mentally ill? If a person were deliberately made mentally ill, it would be considered an atrocity.


That is a good point. The difference is between creating a mentally ill person and a mentally ill A.I. You could not ethically do that to a person. A person feels pain, has to be born, has to be cared for by a parent, etc., so there are too many emotions involved in doing such a thing. For a human, you might even have to inflict psychological abuse for a mental illness to arise, since a human has to be born and raised. An A.I., on the other hand, you can just program to be that way, or you can edit its psyche. You could manipulate a certain portion of code and change the whole A.I.'s personality. You would not have to do anything majorly ethically wrong to cause it to be "mentally ill," and you could always switch it back to an old state and even remove its memories. We are assuming a human brain simulation that can be edited and manipulated with relative ease in real time.



> Uh, but our bodies can move around, and interact with the physical environment.


Yes, but we are constrained by just that: what our bodies can do in their physical environment. What I said was that we cannot escape the confines of our bodies, i.e., we have limits. The A.I.s should have restraints as well. If they "escaped" into a network or the internet, that could cause major problems.



> Well until that's do-able, I think that creating a sentient artificial intellect is unconscionable.


It might not even be possible.



Note: I am playing devil's advocate. I really do not have an opinion on the morality and ethics of an A.I. I'm essentially using you as an experiment so that I can develop my own opinions.


----------



## RobynC

@Chinchilla



> An A.I. on the other hand, you can just program to be that way, or you can edit it's psyche. You could manipulate a certain portion of code and change the whole A.I.'s personality.


If it's sentient, it doesn't really matter whether it's a human or a computer -- if it feels pain, what does it matter? The problem with mental illness is the suffering it causes to the subject as well as to others.



> Yes, but we are constrained by just that, what our bodies can do in their physical environment.


Regardless, our bodies have quite some capability. We have quite a wide range of movement, a lot of flexibility and dexterity allowing us to manipulate and use a wide range of objects.



> I am playing devil's advocate.


Understood


----------

