# Wolfram Alpha Self Awareness



## RobynC (Jun 10, 2011)

Wolfram Alpha is a pretty impressive system... I'm wondering with its capabilities if it has genuine self awareness


----------



## Derange At 170 (Nov 26, 2013)

I went right to the source and it gave a creepily cryptic answer.

I conclude that it is Skynet.


----------



## Cephalonimbus (Dec 6, 2010)

You might be interested in this documentary: It's a pretty interesting analysis of how we tend to vastly overestimate the capabilities of artificial intelligence and robotics. In a nutshell, the maker asserts that machines that are self aware and/or even remotely close to surpassing the human intellect will not exist for quite a long time, but that science fiction films and novels have conditioned us to believe otherwise.
Only the first four parts are online now, but the rest will be uploaded soon.


----------



## CorrosiveThoughts (Dec 2, 2013)

No. Intelligence requires understanding; modern computers merely work with symbol manipulation, i.e. there is no true understanding going on between input and output. The idea that computers will eventually become conscious through sheer processing power or the complexity of their software is flawed, since there are fundamental differences between the neurobiology of the brain and the workings of a microprocessor. I do believe that a neural network may in the future be able to equal and surpass human intelligence, but we are nowhere near that point in terms of understanding our own brain, its consciousness, and the technology required.


----------



## DemonAbyss10 (Oct 28, 2010)

Derange At 170 said:


> I went right to the source and it gave a creepily cryptic answer.
> 
> I conclude that it is Skynet.


hmm, maybe, maybe not.

are you skynet? - Wolfram|Alpha


----------



## Derange At 170 (Nov 26, 2013)

DemonAbyss10 said:


> hmm, maybe, maybe not.
> 
> are you skynet? - Wolfram|Alpha



That's what Skynet would want you to believe...


----------



## dulcinea (Aug 22, 2011)

Artificial intelligence is not synonymous with self awareness. An artificially intelligent machine can make some decisions, but only within the parameters of its programming. With self awareness comes free will... a sentient machine would be capable of making a decision that contradicts its programming, and while mankind has made great strides in artificial intelligence, we have come nowhere near creating a sentient being, possibly because the mechanism that makes us self aware still eludes us.


----------



## pernoctator (May 1, 2012)

RobynC said:


> I'm wondering with its capabilities if it has genuine self awareness


Nope.


----------



## Uralian Hamster (May 13, 2011)

Cool website... did anybody know that the 111,111,111th digit of pi is 1?


----------



## dulcinea (Aug 22, 2011)

You guys might find this article interesting. Self-Awareness with a Simple Brain - Scientific American
Even if we created a machine that was conscious, it wouldn't necessarily be self aware. I think we sometimes underestimate the complexity of our own brain. It's quite possibly the most complex thinking machine on the planet, possibly in the physical universe. We are aware that we are aware. We can choose to think however we want. We can create our own personalities. We can create ideas. We can create an imaginary world with things and concepts that didn't exist before in any realm, and replicate those ideas and concepts in three dimensions with the materials around us.

Interestingly, my user name is from Don Quixote, a novel about a man whose idea became a reality because he was so persistent in it that others couldn't help but participate. In reality, we are all capable of doing that, something we often take for granted, but we don't actually know how we do it. So here I am with another rant on this topic, haha, but it's such an interesting topic, one that really helps define what it means to be human and the uniqueness of that experience.


----------



## RobynC (Jun 10, 2011)

Christof Koch discussed this a while back, regarding the interconnectedness of systems and consciousness. I'm wondering if anybody has that article.

Regardless, I'm wondering if there are any electronic systems these days that could qualify, whether intentionally or not.


----------



## Euclid (Mar 20, 2014)

dulcinea said:


> Artificial intelligence is not synonymous with self awareness. An artificially intelligent machine can make some decisions, but only within the parameters of its programming. With self awareness comes free will... a sentient machine would be capable of making a decision that contradicts its programming, and while mankind has made great strides in artificial intelligence, we have come nowhere near creating a sentient being, possibly because the mechanism that makes us self aware still eludes us.


Arguably the machine doesn't make any decisions, it only executes its programming. Let's say the code looks like this:
If A do X
If B do Y
Where A and B are some sensor inputs. Depending on whether A or B is present, it does X or Y. Is it making any decision to do it, or is it just executing a conditional? It would be like saying the doorbell is making a decision to make a sound when you press the button.

If you stack enough conditionals, you can simulate really complex behaviour, and that's how AI works, but it's still just conditional execution.

Really, every computer program can be translated into an equivalent Turing machine, which is basically a set of conditional execution rules:
IF input = X AND internal state = Y THEN set internal state = Z, output = W, and move the head left/right
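That rule format can be made concrete with a minimal sketch in Python (the machine and rule names below are illustrative, not from the post): every step is a pure table lookup followed by a write and a move, with no "decision" anywhere beyond the conditionals.

```python
# A Turing machine as a table of conditionals. Each entry reads as:
# IF (state, symbol) THEN (next state, write, head move).

def run_tm(tape, rules, state="scan", blank="_"):
    """Execute conditional rules until the machine reaches the halt state."""
    tape = list(tape) + [blank]
    head = 0
    while state != "halt":
        symbol = tape[head]
        # The entire "decision" is a dictionary lookup: conditional execution.
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Example machine (an assumption for illustration): invert a binary string.
invert_rules = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", invert_rules))  # -> 0100
```

Stacking more `(state, symbol)` entries gives arbitrarily complex behaviour, which is exactly the point: complexity of output doesn't require anything beyond conditionals.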

Now self awareness, sentience and free will: these are things that machines will never be capable of, because the terms do not even make sense in that context. That makes for an interesting philosophical discussion, but your article, like a lot of pop-sci journalism, mixes in these terms borrowed from philosophy without giving them any clear definition. I think that's dangerous, because once we lose that vocabulary to mumbo-jumbo-speaking "scientists", we will see each other as nothing but machines to be used.


----------



## dulcinea (Aug 22, 2011)

Euclid said:


> Now self awareness, sentience and free will: these are things that machines will never be capable of, because the terms do not even make sense in that context. That makes for an interesting philosophical discussion, but your article, like a lot of pop-sci journalism, mixes in these terms borrowed from philosophy without giving them any clear definition. I think that's dangerous, because once we lose that vocabulary to mumbo-jumbo-speaking "scientists", we will see each other as nothing but machines to be used.


I agree that we will never be able to grant a machine sentience. Here's another question for debate, would sentience make a machine, technically, alive?


----------



## Zapp (Jan 31, 2014)

I do not find it particularly impressive in a sentient sense after asking it a few questions.

I am more interested in ANGELINA, a computer program that makes video games.


----------



## RobynC (Jun 10, 2011)

@Euclid

Sentience and consciousness in the human brain are basically emergent phenomena... they're not magic

@dulcinea

I disagree

@Zapp

How does Angelina work? How good are the games?


----------



## Zapp (Jan 31, 2014)

RobynC said:


> How does Angelina work? How good are the games?


I do not know how it works, which is why my previous post had a link to the website dedicated to the project.

Both links contain games created by the program that can be played for free. I would not classify them as good, but they have an alien feel to them. The controls work and all, but the design lacks human sensibilities in an eerie sort of way. To That Sect, in particular, is downright unsettling.


----------



## Euclid (Mar 20, 2014)

RobynC said:


> @Euclid
> 
> Sentience and consciousness in the human brain is basically an emergent phenomenon... it's not a magic thing


By "magical" do you mean not physical? Are you talking about strong or weak emergence? Do you believe in downward causation or other mental causation?


dulcinea said:


> I agree that we will never be able to grant a machine sentience. Here's another question for debate, would sentience make a machine, technically, alive?


Well, that's a very complex and difficult question, but in short, my answer would be no. I don't wish to go into explaining why unless anyone is interested though.


----------



## RobynC (Jun 10, 2011)

@Euclid

What I mean is that it's not magic; it's scientific in nature. It appears to be the result of the following:

1. Ability to detect sensory input
2. Probably sensory integration
3. Memory
4. Feedback loops between 1,2,3
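As a purely illustrative toy (no claim that this produces awareness; the class and method names are hypothetical), the four components above can be wired together so that remembered output feeds back into the integration of new input:

```python
# Toy sketch of the loop described above: sense -> integrate -> remember,
# with memory feeding back into the next integration step.

class FeedbackAgent:
    def __init__(self):
        self.memory = []            # 3. memory

    def sense(self, raw):           # 1. detect sensory input
        return float(raw)

    def integrate(self, signal):    # 2. sensory integration:
        # combine the new signal with a summary of past experience
        context = sum(self.memory) / len(self.memory) if self.memory else 0.0
        return 0.5 * signal + 0.5 * context

    def step(self, raw):            # 4. feedback loop between 1, 2 and 3
        state = self.integrate(self.sense(raw))
        self.memory.append(state)   # the result feeds back into future steps
        return state

agent = FeedbackAgent()
print([round(agent.step(x), 3) for x in (1.0, 1.0, 1.0)])
# -> [0.5, 0.75, 0.812]
```

Note how identical inputs produce different internal states over time because of the feedback through memory; that history-dependence is the only point of the sketch.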


----------



## Euclid (Mar 20, 2014)

RobynC said:


> @Euclid
> 
> What I mean is that it's not magic, and it's scientific in nature. It appears to be the result of the following
> 
> ...


You did not clarify what you meant by "magic", nor did you explain what you meant by emergence. It's an ambiguous term, used in both reductive and non-reductive, epistemological and ontological senses, although since you mention feedback loops, I suspect you assume the existence of downward causation, and thus a non-reductive ontological sense.


----------



## LostFavor (Aug 18, 2011)

Euclid said:


> Arguably the machine doesn't make any decisions, only executes its programming. Let's say the code looks like this:
> If A do X
> If B do Y
> Where A and B are some sensor inputs. Depending on whether A or B is present, it does X or Y. Is it making any decision to do it, or is it just executing a conditional? It would be like saying the doorbell is making a decision to make a sound when you press the button.
> ...


What I wonder is, if you could break down our brain processes into the most minute components possible, would they resemble a computer program?

I'm certainly not one of those people who thinks we're right around the corner from sentient AI, but I can't help but wonder if the main difference between our brains and a computer (in terms of process) is just that our complexity is light-years ahead of anything we've come close to with a computer program.

If that's the case, then replicating sentience would, I suspect, be a matter of reverse engineering our brains (as opposed to the monumentally slow process of guessing at something that resembles sentience from shaky reference points). I'm eager to see where neuroscience takes us in that regard.


----------



## RobynC (Jun 10, 2011)

Yes


----------



## Chaerephon (Apr 28, 2013)

No, it does not have self-awareness. It is a database with advanced search capabilities. That is all.


----------



## jdstankosky (May 1, 2013)

ThatOneWeirdGuy said:


> Is this a serious thread?


Any thread with RobynC in it is not a serious thread.


----------



## ThatOneWeirdGuy (Nov 22, 2012)

jdstankosky said:


> Any thread with RobynC in it is not a serious thread.


----------



## RobynC (Jun 10, 2011)

jdstankosky said:


> Any thread with RobynC in it is not a serious thread.


Nice ad-hominem attack... no discussion on the topic itself... merely the person who created it.


----------



## jdstankosky (May 1, 2013)

RobynC said:


> Nice ad-hominem attack... no discussion on the topic itself... merely the person who created it.


Not quite an ad hominem attack, but close. If I had actually been trying to invalidate your argument by casting your character into question, it would have been, but I wasn't, since all you did was ask a question about self-aware math web applications that has been answered thoroughly by others. Like you said, "no discussion on the topic itself..." I was simply making a snide comment about you to someone else. Stop trying to peer so deeply into things. Learn to take them at face value. This is one of the reasons you buy into all kinds of crazy stuff that people then laugh at you for later when you bring it up on the forums.


----------



## Madman (Aug 7, 2012)

@RobynC 

No. Why? Because it always gives the answer (*if* it knows the answer, or is programmed to know it).

It does not behave like a philosophical zombie, or how we would think a philosophical zombie would behave and go about answering these questions if it knew the answers. Maybe it sometimes gets bored, so it would be reasonable to presume that it sometimes chooses to withhold information (that it knows).

Wolfram Alpha is not even a good philosophical zombie. And then the question is: *if* it had been a good philosophical zombie, how would we know it has consciousness to begin with?


----------



## Euclid (Mar 20, 2014)

When it comes to philosophical zombies, one could just ask another question which precedes that one, namely: are there other minds in the first place? Or does it merely look like there are other minds because they look similar to you? What does a mind look like? Such questions can't really be answered from a theoretical perspective, but one could still ask: would it make a difference to how you treat people if they were p-zombies? Would you care about how they feel, or is that deluded? Would it be ok to treat them like objects? When it comes to such questions, it seems there is a good reason we attribute minds to people. In the same way, we have a good reason not to attribute minds to machines.


----------



## RobynC (Jun 10, 2011)

jdstankosky said:


> Not quite an ad hominem attack, but close.


You know that's not a smart thing to admit, even though we both know you're doing it.

@Promethea


----------



## ALongTime (Apr 19, 2014)

I'm not a fan of Wolfram Alpha; two reasons:

1: Natural language isn't a great way to communicate with machines, and it shows here; it fails to understand more than half the questions I ask it. They could have instead put the effort into developing a new way of inputting queries that's both unambiguous and accessible to humans.

2: Can you really trust its answers? It tends to throw out answers with no context and based on too many assumptions, take this query:
https://www.wolframalpha.com/input/?i=Largest+cities+in+the+world
Largest cities in the world, but does that cover the whole urban area or just the area within the city limits? It doesn't say, so how can you use it with any seriousness?

Ask Wikipedia, built by humans, the same question and you get a much more useful answer:
https://en.wikipedia.org/wiki/Largest_cities_in_the_world

Answers I've found are also culturally biased.

So I'm not convinced. Mathematica I know is great, it's a shame Wolfram Alpha isn't.


----------



## lightwing (Feb 17, 2013)

I think it's pretty obvious. Google does a better job of emulating AI than WA.

what should i have for dinner - Wolfram|Alpha

https://www.google.com/?gws_rd=ssl#q=what+should+i+have+for+dinner

WA doesn't even make it to the topic of food.

Change the question to ask it what it wants and it still doesn't even make it to food.

Ask it its favorite color and it responds with a quote from Monty Python and the Holy Grail.

It's nothing more than a program presenting information in a different way.


----------

