..
quote: " There is nothing beyond scope, because the size variation for anything is upwards in scale."
I read TENM back in the 80s, and I was not convinced by it. I saw no solid argument in it that prevented a computer simulation with possible random inputs producing all of the complexity of life, of which we are just one example. We are chemical machines made up fundamentally of electrons and nuclei in a 4-D space interacting according to the laws of quantum physics. Such a system can in principle be simulated (including statistical random elements) in a vast supercomputer (very big, and very slow at present). But I don't believe the building blocks are the essential thing, so a (suitably randomised) simulated model of the brain, together with sensory inputs suitably encoded, should in principle be able to simulate all of the patterns of a living brain. I am not convinced by Penrose's hand-waving appeals to some fundamental quantum mechanical difference between a brain and a computer.
Penrose, being a materialist, believed he had to go far afield to find a "new science" of the brain. However, the example you mention here is under a lot of fire from the Lucas-Penrose Thesis. See, for example, Lucas's website, which has more links to pro-Godelian arguments. Shadows of the Mind was designed to make a more rigorous case. There are several books that argue for the Lucas-Penrose Thesis.
..
amenhotepi here,
i read: "Kant rejects the view that space must be either a substance or relation. Instead he came to the conclusion that space and time are not discovered by humans to be objective features of the world, but are part of an unavoidable systematic framework for organizing our experiences."
http://en.wikipedia.org/wiki/Space
...
do those who have read The Emperor's New Mind [the book] or other works agree .. ?
I have heard of such viewpoints, and I sometimes want to look up more of what Daniel Dennett says about this, as he proposes an "evolutionary engineering" explanation of this, but I don't know much about his version.
However, I disagree profoundly with them. So far I support the idea of space and time being objective because I look at quantum gravity from Causal Dynamical Triangulations, so I think there are good grounds for holding that even if there are causal sets beyond space and time but related to them, space and time are certainly not mere tools for our understanding.
Thanks amenhotepi for your link on space; it didn't take me long to get lost in the subject of Epistemology (the theory of knowledge), as it was one of the first definitions in the article. I can see how the theory of time and space relates to studying one subject to understand another. Thank you quark--in this forum I shall never be bored again!!!
Ben Dewey, it's basically a reductio proof: it starts by assuming that the mind is computational/algorithmic, and then exhibits Godel sentences whose truth we should be able to obtain by insight; but by adding these statements to new programs, etc., we would be outwitting ourselves--we would have to outwit ourselves, which is impossible--so the original assumption we made must be false. That's basically it. If you want, go to J.R. Lucas's homepage, where he has a link to a book he wrote in which he gave the early argument.
That this argument is flawed may be shown as follows. Assume (reasonably) that the brain functions in a statistically predictable way according to the laws of quantum field theory, applied to its initial state. Clearly this quantum system can (in principle) be modelled to any required degree of accuracy using finite-difference modelling on a large digital computer (this is hugely impractical, but this is a thought experiment, not an engineering solution). Hence the behaviour of the brain may be modelled to any required degree of statistical accuracy using a digital computer.
The flaw may lie in an over-rigid assumption about how a computer may be used to attack problems. Perhaps it is analogous to the use of a Monte Carlo simulation, where one does not attempt to solve a problem completely, but rather randomly generates examples and makes inferences about the convergence of conclusions to the truth given an increasing amount of evidence. I am not claiming that this process is entirely reliable (nor that human beings are entirely reliable, as history proves). The true operation of the brain in attacking problems is more of a combination of the inductive and the deductive, with a lot of guidance by approximate pattern matching, sprinkled with some lucky leaps.
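To make the Monte Carlo analogy concrete, here is a minimal sketch (the pi-estimation task is my own stand-in example, not anything from the discussion): no single random sample proves anything, but the inference tightens as evidence accumulates, which is the "converge on the truth" strategy described above.

```python
import random

def estimate_pi(samples, rng):
    """Estimate pi by randomly sampling points in the unit square
    and counting the fraction that land inside the quarter circle."""
    hits = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / samples

rng = random.Random(42)
for n in (100, 10_000, 1_000_000):
    # the estimate is never "proved", but its scatter shrinks with n
    print(n, estimate_pi(n, rng))
```

The point of the analogy is only that a computer need not be confined to exhaustive deduction; it can also accumulate inductive evidence.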
When there is a proof that appears very plausible (a coherent model), it still takes second place to a proof that follows necessarily. Perhaps there can be no such model, if the LP thesis is correct. But perhaps I misunderstand you....
Well, my brief last post proved to my satisfaction that it is possible (in principle) to build a computer which replicates the behaviour of a human brain to any accuracy required. This in my view overturns the philosophical position that human brains are in some sense fundamentally more capable than artificial ones. Proofs that claim to show otherwise must make unwarranted implicit assumptions about the way in which problems are dealt with by computers or brains.
I don't see so clearly how quantum mechanics has much room to play in the brain. Max Tegmark published decoherence calculations arguing against Roger Penrose's idea that microtubules could have some role at the quantum level in cognition, for example.
On the contrary, it is believed that quantum field theory describes the behaviour of any physical system with absolute precision (except where quantum gravity would be needed). All chemistry and physics amounts to quantum mechanics (quantum field theory, for greatest precision, if computational resources were not a constraint) and general relativity (for which a trivial Newtonian uniform field approximation is more than adequate here) and these theories are completely known in low energy systems such as this. Of course, for most practical purposes, other theories are used that are accurate approximations in a particular domain of application, but that doesn't change the situation.
One thing that might be considered a significant addition to the two fundamental theories might be the statistical analysis of systems with large numbers of similar components, such as a gas, rather than the ludicrously cumbersome solution of the equations of motion (many parallel simulations are required since all parameters have uncertainty). But such statistical theories do not seem very relevant to something like the brain, where every neuron has a specific location and connections with its neighbours, and where signals pass along well defined paths. In principle, where there is uncertainty in a system, all you need to do is do many deterministic simulations with parameters varying to give precisely the right statistics.
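The last sentence above can be sketched as a toy ensemble: run many fully deterministic simulations with the uncertain parameter drawn to match its statistics, then read the statistics off the ensemble. The exponential-decay model and all parameter values here are hypothetical stand-ins of my own, not anything specific to the brain.

```python
import math
import random

def simulate(rate, t=1.0):
    # one fully deterministic evolution for one sampled parameter
    return math.exp(-rate * t)

def ensemble(n, rng, mean_rate=1.0, spread=0.1):
    """Draw the uncertain rate n times, run each deterministic
    simulation, and return the mean and standard deviation of the
    outcomes -- statistics without a separate statistical theory."""
    outcomes = [simulate(max(0.0, rng.gauss(mean_rate, spread)))
                for _ in range(n)]
    mu = sum(outcomes) / n
    var = sum((x - mu) ** 2 for x in outcomes) / n
    return mu, var ** 0.5

rng = random.Random(1)
mean, sd = ensemble(10_000, rng)
```

The design point is that uncertainty lives only in the sampled inputs; each individual run remains exactly the kind of deterministic simulation the thought experiment requires.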
amenhotepi: You quoted "Kant rejects the view that space must be either a substance or relation. Instead he came to the conclusion that space and time are not discovered by humans to be objective features of the world, but are part of an unavoidable systematic framework for organizing our experiences." The trouble is that the reference here is to *inner space* and *inner time*. Kant gives a description of how we organize our subjective experiences, while leaving the question of how the things-in-themselves are arranged largely unaddressed (although I *think* he thought that inner space and inner time had to correspond to some sort of outer space and outer time--but I don't remember that very well and could be mistaken about it). His concern is not with the structure of an external spacetime, but rather with the structure of internal experiencing. One might view his account as the first major work of phenomenology (although I wouldn't want to bet that nobody had written one before that I'm simply not thinking of).
strangequark: It has been many years since I read TENM, and all I remember of it is a reference to solitons and the impression that he was making a mistake. I recall reading a review in which the reviewer put precisely what I only vaguely had in mind--alas, however, I do not remember what it was!
However, if the argument is supposed to go something like this: Suppose we are Turing machines. Then we have characteristic Godel sentences, whose truth or falsity we are unable to determine. But <fill in a Penrosean demonstration that we really are able to determine the truth of some such sentence>. Therefore, we cannot be Turing machines. Then, I think, we could easily reply that although each Turing machine has its own characteristic Godel sentence(s), our brains grow and change, so that we do not remain the same Turing machine over time and our characteristic Godel sentence(s) change, too. We are able to adapt to the discovery of a characteristic Godel sentence by changing, so that the one the demonstration showed us able to evaluate is no longer a characteristic Godel sentence. Yes, there will be another--but we can change to evaluate that, too. And so on. Only if the demonstration were that we could determine the truth of *any* Godel sentence of *any* Turing machine (or perhaps of some class of Godel sentences of possible Turing machines meeting specifications I couldn't name) would the proof have a chance of going through.
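The regress described above can be caricatured in a toy model. Everything here--the axiom labels, the `proves` rule, the string standing in for a Godel sentence--is a deliberate oversimplification of my own, not real proof theory; it only illustrates the shape of the reply: extend the system to capture its Godel sentence, and a new one appears.

```python
def godel_sentence(system):
    # stand-in for "this system (identified by its axioms) does not
    # prove me": a label constructed from the system itself
    return "G(" + ",".join(sorted(system)) + ")"

def proves(system, sentence):
    # toy rule: a system proves exactly its axioms -- so, by
    # construction, never its own Godel sentence
    return sentence in system

f = frozenset({"PA"})
g1 = godel_sentence(f)
assert not proves(f, g1)                 # F cannot prove G(F)

f2 = f | {g1}                            # grow and change: F' = F + G(F)
assert proves(f2, g1)                    # F' now captures G(F) ...
g2 = godel_sentence(f2)
assert g2 != g1 and not proves(f2, g2)   # ... but has its own G(F')
```

The chain never closes, which is exactly why the demonstration would need to cover *any* Godel sentence of *any* Turing machine for the reductio to go through.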
I think a hidden flaw in Penrose's argument in terms of its practical consequences (and the one which makes it possible for him to make it despite the existence of my one-page thought experiment showing that the brain can be emulated to any degree of accuracy by a computer) is that he envisages the existence of an idealised human intelligence, able to do everything that we might consider possible for a human to do. Real brains are finite, limited and not entirely reliable. The finiteness condition is rather important, since it is trivial that some proofs are just too big to fit in a human brain, on the Earth, or in the Universe, assuming it is finite. Curiously, the idealised computer (a general Turing machine) does not have this limitation, so perhaps it could turn the tables.
Well, apart from the fact that it, too, doesn't actually exist.
Supposedly, Penrose analyzed the amount of computation necessary for a human to compute what is called "G(H)", and he said it was possible. But what do I know.
However, if the argument is supposed to go something like this: Suppose we are Turing machines. Then we have characteristic Godel sentences, whose truth or falsity we are unable to determine. But [fill in a Penrosean demonstration that we really are able to determine the truth of some such sentence]. Therefore, we cannot be Turing machines.
This is clearly flawed, because if X is a characteristic Godel sentence of Penrose's brain, then he cannot articulate the above proof using X as the "Penrosean demonstration," because he doesn't know whether X is true or not!
OK, the books have been on my shelves for 20 years. I guess it's time to read them.
Hello all members,
This is what should become a hard core math, physics, and logic team! Not just for fun and recreation, but to have some serious learning too. I won't be too hard on everyone ;). I don't expect us to win every vote chess game or team match, but please try hard.
This group has been named after one of Roger Penrose's books, The Emperor's New Mind. TENM is a book covering a wide range of math and physics topics, but is more than just a popular book.
This is the first book where Penrose advances The Godelian Case against Strong A.I. The book is heavily spiced with discussions of the mind, where possible quantum effects may occur, etc.
Specifically, Penrose (and fellow inventor of the argument, J.R. Lucas) uses Godel's Theorems to try to disprove mechanism. In his later book, Shadows of the Mind, Roger Penrose rebuts the many criticisms of his original argument.
Of course, the whole book made a major impact, and whether or not you agree with Penrose or Lucas, it's one fascinating read!
Happy playing and good synapses everyone.