Pros and cons of chess

Elroch

It is interesting to identify the reasons that we infer consciousness or the lack of it.

Firstly, we all know that one person is conscious (ourselves).

Almost everyone infers that other people are conscious, as we are similar to them and they act in a roughly similar way to us. (There is a philosophical school called solipsism that eccentrically refuses to make this inference.)

Most people infer that at least higher animals have some consciousness. This consensus has increased greatly over time, to the point where Western countries consider cruelty to fish a crime (as a recent circus act in Australia and a kid who thought of something amusing to do with a microwave have found out). Apparently lobsters are not yet considered conscious in law.

So it seems most of the animal kingdom is widely considered to be conscious, based on animals having some structural similarity to us, having senses and reacting to stimuli.

The only condition a sophisticated robot fails to meet is structural similarity to us: it is made of silicon, motors, etc. rather than neurons, muscles, etc.

It is a fact that when machines resemble animals, people do react to what happens to them a little as they would for animals, and it is not difficult to believe attitudes towards them would change if they behaved more like living creatures in general.

chessroboto

This was exactly my point of argument in between the philosophical posts in this thread: why are we narrowing the basis of consciousness and intelligence to humans only?

You mentioned "higher" animals, and you cut the legal category off at fish. At what point is the category going to be lowered? When we've included everything that is organic? Or only down to the point where humanity will not starve or malnourish itself?

And again, you mentioned how we as humans perceive "higher" animals, and how the world's age-old act of eat-or-be-eaten-to-survive is deemed "cruelty" to a species. Is it cruelty because we as humans feel pain and sorrow when we watch the species be butchered? Is it because we know how they would feel?

What is the basis of cruelty? Is it the infliction of pain? What is pain? Isn't it the information sent through the nervous system to the brain and its interpretation?

If that is so, then crustaceans and insects would feel pain too, especially since their primary system is one huge nervous system. Does that mean that they qualify as animals that should be given the same degree of consideration for cruelty?

Finally, the primary weakness of the argument of consciousness starts with the fact that it is limited to how we as humans understand consciousness. Anything else that is not like a human being cannot be defined as being conscious. Anything that does not behave or think like humans cannot have intelligence even in its lowest form.

Is there a school of thought wherein the concept of consciousness, intelligence and sentience are not limited by the parameters of a human being? (And please don't tell me that it would be Engineering. Wink)

Elroch

As I consider consciousness a fuzzy concept, it will be no surprise that I don't really believe in a sharp dividing line between conscious and unconscious, merely a difference of degree.

I too have thought of the issue of whether stopping a circus act which involved swallowing and regurgitating live fish was more about the sensibilities of the audience than about protecting the fish.

An interesting link on the inorganic: Best robots of 2009

chessroboto

So when we take the perception of consciousness that is not limited by the parameters of a human being back to robots or chess engines, how do we answer the familiar questions?

1. Are robots with sensors really conscious? Yes, but not the same as humans?
2. Are chess engines intelligent in playing chess moves? Yes, but not the same as humans?

trysts

RC_Woods wrote:



I think we could meet each other half way here. Sometimes people use a concept that isn't well defined as a get-out-of-jail-free card to sustain their argument. In those cases, they should be more specific to defend the argument, and not doing so would be sophistry. But on the other hand, sometimes the common definition is narrow enough to sustain a point. In those cases, hammering continuously on narrowing it down further is unnecessary and can be seen as a sophist trick to stall the discussion.

I'm assuming you are into sophistry, so perhaps it is just a mild disagreement. I don't think it is necessary to completely halt discussion on the definition of 'intelligence', but I can agree that some added specifics would help. In short, I'm not sure if a broad concept always begs for active participation beyond the broad definition.

You and I don't agree, so I won't meet you "half-way". Also, I'm not "into sophistry". Your running to the aid of Elroch and basically saying that, since you are satisfied with the word "intelligence" left to a dictionary definition, I should be satisfied too, unless I'm playing tricks or something, is wrong.

The book I linked you to is actually not so much a book on Kim's own contributions, but rather an introduction to several theories of mind. Jaegwon Kim himself mentions in the introduction that most philosophers agree with ontological physicalism, and I would bet he has some inside knowledge.

It's just arrogance that would make a statement like "most philosophers agree with...". I would say he fancies his readers to be pretty dumb, because his readers have access to centuries and centuries of disagreements called The History of Philosophy. So no, I wouldn't bet on him having inside knowledge.

That being said, I'm not against some active participation on my side. The fact that only dualism presumes there is more than ordinary matter, and the fact that there are many more theories being actively pursued, make his belief seem reasonable. Furthermore, I think the arguments against dualism are stronger than those in favour of it, so I would expect most philosophers to favour physicalism.

Hmmm... You expect most philosophers to favour physicalism... hmmm... Maybe in 1000 years or so, when you expect a lot of things to happen?

Referring to the robot-automobile discussion:

I agree with trysts that neither robots nor cars should be expected to have a mentality. I disagree with the statement that it is ridiculous for any philosopher to believe a mechanical or electrical mind could be created.

Where does that "statement" appear in this thread?

Quite to the contrary, I think most philosophers wouldn't have theoretical objections.

I'm glad you keep reminding me what "most philosophers" think, that way I am reminded that you're full of poop.

Elroch is spot on in that respect: our mentality seems to stem from a physical object (the brain), and there is no apparent reason to assume that a physical object that copies all of its functions would not exhibit the same mentality.

That being said, I do think that creating an exact copy of the mind wouldn't mean we understood what a mind really is. In that sense, success there would only mean our technology is amazingly advanced. That, I don't foresee happening in the next 1000 years.

Thanks, Nostradamus, for yet another update of your 1000-year sense.

To understand exactly what it is we would create, and perhaps to create it in ways other than stupidly mimicking the human brain, would require more understanding. It is perhaps even harder to predict when (and if) we would reach such levels of understanding.

The point of the argument was that, since brains are physical and have mentality, it is reasonable to assume that physical objects can be created with mentality. I do not think this is philosophically retarded or outdated. I would agree with trysts that to speculate about it being done is science fiction, as it strongly appears to be out of reach for the next 1000 years or more.

Referring to the current state of physics and quantum theory:

Actually most of this knowledge isn't that new. It dates back to the beginning of the 20th century. Some important things have been discovered recently, but in general philosophers have no excuse not to be aware of the current state of physics.

The quantum world provides room for exciting thought experiments, but currently there is serious doubt that quantum effects would influence the relatively huge brain cells that supposedly make up our mind. Even if they did, quantum effects still don't guarantee magical wizardry. Think about it: a dice roll is random, but we can describe it pretty well, can't we? Quantum mechanics is just lots and lots of dice rolls. The average numbers can be deduced fairly easily, and on any larger scale the averages start behaving as set values in systems that usually work deterministically.
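
(To make the averaging claim concrete: a minimal Python sketch, purely illustrative and not part of the original post. Individual dice rolls are random, but the average over many rolls settles towards the predictable value of 3.5, which is the sense in which lots of random events behave like a set value on a larger scale.)

import random

# Individual dice rolls are random, but the running average converges
# towards the predictable expected value of 3.5.
def average_of_rolls(n_rolls):
    total = sum(random.randint(1, 6) for _ in range(n_rolls))
    return total / n_rolls

for n in (10, 1000, 100000):
    print(n, "rolls -> average", round(average_of_rolls(n), 3))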

Let's see, you gave us another 1000 year update, you let us know that philosophers should book-up on "the current state of physics", and then you give a mini-lecture on Quantum Mechanics which of course, sounded just as empty as when you tell us what most philosophers think.

RC_Woods

wow trysts, maybe you should consider yoga... the anger!

Before I continue my reply, just so I've said it: I unfortunately messed up when I said you "are" into sophistry, because I meant "aren't". (Serious.) (Would also be more in keeping with the character of my post, right?)

I was on the side of Elroch because of post #141. You ridiculed the ideas he put forward, and I disagreed. The intelligence thing was secondary.

I doubt that Jaegwon Kim fancies his readers dumb. You probably do too.

I don't think commenting on (contemporary) majority opinion in any field is by definition arrogant. And I said I thought majority opinion to be X, not that it simply is the case. If you have reasons to think otherwise, feel free to share.

I don't predict the future, I just make an estimate that something is far out. 1000 is just a number; I'm not Nostradamus. Laughing (link to a legendary Dutch stand-up comedian on the man)

Calm down in general.

And lastly, if you really think people should live up to their ramblings, then so should you. You are (quite literally) throwing pooh at the moment. Not your usual style.

trysts

Oh, okay. I wasn't angry when I wrote that. I was just not satisfied with the emoticons I went through. Smiley face? Nah. Laughing face? Nah. Angry face? Nah. I had no emoticons.

RC_Woods
trysts wrote:

Oh, okay. I wasn't angry when I wrote that. I was just not satisfied with the emoticons I went through. Smiley face? Nah. Laughing face? Nah. Angry face? Nah. I had no emoticons.


You used bold blue letters to say, more or less, that I was an empty sounding arrogant dumb Nostradamus full of pooh. This unusual lack of subtlety, more than the lack of emoticons, spurred my belief that you had become mildly enraged. After all, it seemed out of character compared to your regular style.

I am however happy that you remained calm, and I do hope the emoticon artists soon pick up on the angerlaughsmile. Laughing

Elroch

[Note: a good reason for not being insulting and abusive in a forum is that such actions inevitably lose the respect of most other people]

It is my belief that the biggest difference between current machines made by humans and humans themselves is a matter of specificity of function. There are many specific things humans can do which are considered highly demanding and widely respected, and which machines can either do now or could plausibly do in the future: for example, playing chess (now), walking on two legs (now), proving theorems in a rigidly defined area of mathematics (now, in some areas), doing surgery (only as helpers at present, but likely able to carry out whole tasks in the future using sophisticated sensory apparatus), driving from one location to another safely (now), making idle chit-chat (now), building cars (now), hoovering the floor of a room (now), mowing a lawn (now), and so on.

A huge difference is that a human being does not act in a way which is reasonable to describe as being defined by predetermined rules, or to achieve predefined tasks. Darwin might say that humans are de facto designed for the single purpose of propagating their genes, but this seems a poor explanation for all human behaviour.

Humans are born flexible, and can learn to understand an infinite number of different things, and to generate an infinite variety of functional behaviours. But is this degree of generality beyond machines?

I feel a key part of it is a totally flexible ability to create models of parts of the world in our minds and to manipulate those models in order to determine our actions. To some extent this is hardwired: our genes determine the general function of large parts of our brain, and then our environment (including education) fills in the data in some of these parts.

There is a lot more I could say, but I'll just jump to the key question.

Is there any reason why a machine could not be as flexible and adaptable as a human, and capable of forming useful models of new things and phenomena in its world in a way that could be used to form new objectives, determine how to achieve them through a combination of experimentation and prediction of the results of actions, and to put the plans arrived at into action?
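
(To make the question concrete, here is a minimal, purely hypothetical Python sketch, with invented names, of the kind of loop described above: an agent builds a crude model of its world from experimentation, uses the model to predict the results of candidate actions, and puts the best predicted plan into action.)

import random

# A toy model-based agent: it learns a model of action outcomes by
# experimenting, then uses the model's predictions to choose actions.
class ToyModelBasedAgent:
    def __init__(self, actions):
        self.actions = actions
        self.model = {}  # action -> predicted outcome, learned from experience

    def predict(self, action):
        # Unknown actions are treated optimistically, which drives experimentation.
        return self.model.get(action, float("inf"))

    def choose_action(self):
        # Manipulate the internal model: pick the action with the best prediction.
        return max(self.actions, key=self.predict)

    def learn(self, action, outcome):
        # Update the model with the observed result of the action.
        previous = self.model.get(action, outcome)
        self.model[action] = 0.9 * previous + 0.1 * outcome

# Hypothetical world: each action has a hidden average payoff.
payoffs = {"explore": 0.3, "build": 0.7, "rest": 0.1}
agent = ToyModelBasedAgent(list(payoffs))
for _ in range(200):
    action = agent.choose_action()
    agent.learn(action, payoffs[action] + random.gauss(0, 0.1))
print("learned model:", agent.model)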

trysts
RC_Woods wrote:
trysts wrote:

Oh, okay. I wasn't angry when I wrote that. I was just not satisfied with the emoticons I went through. Smiley face? Nah. Laughing face? Nah. Angry face? Nah. I had no emoticons.


You used bold blue letters to say, more or less, that I was an empty sounding arrogant dumb Nostradamus full of pooh. This unusual lack of subtlety, more than the lack of emoticons, spurred my belief that you had become mildly enraged. After all, it seemed out of character compared to your regular style.

I am however happy that you remained calm, and I do hope the emoticon artists soon pick up on the angerlaughsmile. 


First, I never called you "dumb". Laughing Second, I don't know if I've ever posted anything that was subtle, and I can't tell if I have a "regular style", because of the variety of responses I get on the internet. But it is damned cool of you to allow me to express my opinion in the way I have, without you crying, or getting freaked out by it, which does occur from peeps out there. I am quite happy to drink to you, RC_Woods! Laughing

RC_Woods

@ Elroch

I think it's much too hard to build any machine like that today, but it may sometime be possible.

I do wonder what role motivation would play. I mean, humans are not only capable of doing what you just described, they do so naturally. 

Why would a hugely powerful electrical machine start exploring the world? Or start analyzing it? How would we include motivators?

I'm thinking our motivation to be active at all is a direct result of evolution. We just have that urge, and we may think it is the most rational thing.

I'd laugh hard if our super AI would just sit there and be done with it. Tongue out
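
(On the motivation question: one way it is sometimes framed, sketched hypothetically in Python below with invented names, is to give the machine an intrinsic "curiosity" reward equal to its own prediction error, so that exploring and analyzing the world is exactly what the reward signal pushes it towards.)

# A toy curiosity motivator: the agent is rewarded for observations its
# internal predictor gets wrong, so reducing surprise drives exploration.
class CuriousAgent:
    def __init__(self):
        self.expected = 0.0  # the agent's crude prediction of the next observation

    def intrinsic_reward(self, observation):
        surprise = abs(observation - self.expected)  # prediction error
        # Nudge the internal prediction towards what was actually observed.
        self.expected += 0.2 * (observation - self.expected)
        return surprise

agent = CuriousAgent()
for obs in [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]:
    print(round(agent.intrinsic_reward(obs), 3))
# Rewards shrink as the world becomes predictable and spike when it changes.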

RC_Woods
trysts wrote:
RC_Woods wrote:
trysts wrote:

Oh, okay. I wasn't angry when I wrote that. I was just not satisfied with the emoticons I went through. Smiley face? Nah. Laughing face? Nah. Angry face? Nah. I had no emoticons.


You used bold blue letters to say, more or less, that I was an empty sounding arrogant dumb Nostradamus full of pooh. This unusual lack of subtlety, more than the lack of emoticons, spurred my belief that you had become mildly enraged. After all, it seemed out of character compared to your regular style.

I am however happy that you remained calm, and I do hope the emoticon artists soon pick up on the angerlaughsmile. 


First, I never called you "dumb". Second, I don't know if I've ever posted anything that was subtle, and I can't tell if I have a "regular style", because of the variety of responses I get on the internet. But it is damned cool of you to allow me to express my opinion in the way I have, without you crying, or getting freaked out by it, which does occur from peeps out there. I am quite happy to drink to you, RC_Woods!

You kinda argued that Jaegwon Kim would have to fancy his readers dumb if they were to accept in any form the exact statement I just quoted, so I deduced that I would have to be fancied dumb. Laughing

And eh... I can't delete your posts so what am I going to do about that opinion of yours!! Tongue out

I'll have a drink on you as well though. A very cold beer. Yumm. 

chessroboto
Elroch wrote:
Is there any reason why a machine could not be as flexible and adaptable as a human, and capable of forming useful models of new things and phenomena in its world in a way that could be used to form new objectives, determine how to achieve them through a combination of experimentation and prediction of the results of actions, and to put the plans arrived at into action?

The quick answer is no, there is no reason. What can be said is that there is no machine that has been developed yet to do all the things listed in your challenge question.

I suppose that is why the word "sentient" is used on the TV show "Star Trek: TNG". The ability to learn, adapt and communicate is evidence that the alien species is indeed an "intelligent and conscious" life form on a planet that could potentially evolve into a "warp-capable" species, the de facto standard in the ST universe.

Once a man-made machine with sentience has been created, then the philosophical discussions about machines being alive, conscious, intelligent and moral would be much more interesting and less futile than using chess engines and automobiles of today.

trysts
chessroboto wrote:

Once a man-made machine with sentience has been created, then the philosophical discussions about machines being alive, conscious, intelligent and moral would be much more interesting and less futile than using chess engines and automobiles of today.


There is a variable involved in looking at a machine as a sentient being. That is, the psychology of the particular person "believing" it. Since people around the world already "believe" almost anything told to them, I would not doubt, at all, that if "Fox News", for example, were to report that "scientists" have invented a vacuum cleaner that feels and perceives, people would believe it. Laughing

planeden

i'm lost.  is quantum philosophy a pro or con in chess?  does string theory help or heal?

trysts
planeden wrote:

i'm lost.  is quantum philosophy a pro or con in chess?  does string theory help or heal?


And "if you know", it's merely "word association".

MyCowsCanFly
planeden wrote:

i'm lost.  is quantum philosophy a pro or con in chess?  does string theory help or heal?


 "Silly String Theory"

chessroboto
trysts wrote:
There is a variable involved in looking at a machine as a sentient being. That is, the psychology of the particular person "believing" it.

That goes without saying for most arguments and discussions. As in any scientific and clinical experimentation: when all the factors needed to establish someone's belief have been satisfied, a theory is finally accepted as sound.

Speaking of which, there is a new argument about whether a god was needed for creation to even happen. Forget the testing part, though. Stephen Hawking is back to stir up some religions: http://singularityhub.com/2010/09/07/stephen-hawking-says-god-is-unnecessary-new-book-and-video/

Is this related to chess? Of course! Wink

MyCowsCanFly

I thought Hawking retired and took up tennis.

Elroch
trysts wrote:
chessroboto wrote:

Once a man-made machine with sentience has been created, then the philosophical discussions about machines being alive, conscious, intelligent and moral would be much more interesting and less futile than using chess engines and automobiles of today.


There is a variable involved in looking at a machine as a sentient being. That is, the psychology of the particular person "believing" it. Since people around the world already "believe" almost anything told to them, I would not doubt, at all, that if "Fox News", for example, were to report that "scientists" have invented a vacuum cleaner that feels and perceives, people would believe it.


It already exists, trysts. Robotic devices have a tactile (or radar-like) sense and since "perceives" is a general term encompassing all senses, it is a redundant qualifier.