Sorry, Sqod. You're talking gibberish. You're throwing together terms in a way which betrays that you don't actually know what the words mean. I'm not surprised that 'proposals' based on that kind of (lack of) understanding have been rejected.
For anyone else: I thought it curious that Sqod kept referring to _artificial_ neural networks when people in the field refer to them as neural networks or, more often, just "neural nets" (in writing it's often just ANNs). It's because he hasn't looked much further than the Wikipedia article on the topic, which happens to be called "Artificial Neural Networks". Cellular automata? Another term he's thrown in, of which he seems to have minimal understanding.
Good starting point for the budding young AI researcher/pioneer: "Artificial Intelligence: A Modern Approach" by Russell and Norvig (3rd ed., 2010). It covers a great many topics, neural networks included, far beyond anything in the popular-science books he's quoting like they're the freakin' bible.
Cellular automata are actually very, very simple and you can get the basic ideas down in a few minutes. However, they're little more than a mathematical curiosity at first, and only ever actually useful in very peculiar circumstances. Message me if you'd like to know whether or how cellular automata might be able to help you (I'll be honest -- they're fun but rarely useful).
Both neural networks and cellular automata tend to attract quacks because they're easy to get started with and laced with mysterious promise. In physics, you get inundated by emails from well-meaning lunatics who think they've created a perpetual motion machine; in mathematics, simpler proofs of Fermat's last theorem are apparently still a thing; in cryptography... Lord only knows.
So remember kids: it's your time and your mind -- don't let (people) waste either.

Shahh... clearly you are an expert in this field of developing AI. Your knowledge is enlightening. That being said, I sincerely hope the forum doesn't turn into a shouting match. Sqod brings to the table what I see as an alternative, novel idea. Whether it pans out remains to be seen. Name-calling with words like "quackery" and "gibberish" weakens your credibility, imo. Throughout history, original ideas have been dismissed by the establishment. Many eventually are proven false, but a few lead to tremendous breakthroughs. Let history be the judge. Your approach of "I am right, you are wrong" doesn't win any debate with me.

I would like to hear from laymen... such as myself... at what point will a computer be judged to simulate human behavior? The reference made by Shahh to "when it can make a pizza" was quite good. Sqod's assertion that true AI will be achieved not by a program but by an alternative approach is fascinating.
mdinnerspace, it's unclear whether you're inquiring about AI that is developed exclusively for chess or about general AI...
Realistically, you don't need human-like AI for chess. That'd be overkill and a waste of time. Current engines already beat the strongest human players with ease. In fact, it's a waste of time to develop an AI that simulates human behavior for any specific task that can be done by simple logical machines with crude algorithms; these are more than enough, and much more convenient. Chess engines could benefit from more sophisticated heuristics, but that doesn't necessarily mean making them truly human-like.
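To give a concrete sense of what I mean by "crude algorithms with heuristics", here is roughly the skeleton every chess engine is built on: look ahead a few moves, score the resulting positions with some heuristic, and pick the move with the best guaranteed outcome. A toy sketch in Python of my own making, played on a trivial take-1-2-or-3-stones game rather than chess, and nothing like any real engine's actual code:

def moves(stones):
    # Legal moves: take 1, 2 or 3 stones (but not more than remain).
    return [m for m in (1, 2, 3) if m <= stones]

def evaluate(stones):
    # Crude heuristic for positions where the search is cut off: call it even.
    # A chess engine would count material, king safety, mobility, etc.
    return 0

def minimax(stones, depth, maximising):
    options = moves(stones)
    if not options:
        # The player to move is stuck and loses.
        return (-1 if maximising else 1), None
    if depth == 0:
        return evaluate(stones), None
    best_score, best_move = None, None
    for move in options:
        score, _ = minimax(stones - move, depth - 1, not maximising)
        if best_score is None or \
           (maximising and score > best_score) or \
           (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

print(minimax(9, 10, True))  # -> (1, 1): take one stone, then keep leaving multiples of 4

There is no "thinking" anywhere in that. Swap in a clever enough evaluate() and a deep enough search and you beat world champions at chess, while the program remains a simple logical machine.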
If the discussion is about general AI then I'm not sure why you mentioned "chess programs" in the opening post...
Anyway, this discussion has made me interested in hearing more about Sqod's project. Would it be possible for him to give us a general outline of his new paradigm?
If it's some sort of ANN, I'm afraid it isn't really new.

Well, I brought up chess programs because I was under the impression that, in the early stages of development, AI might be most easily measured by whether a chess program made "original, human-like moves". I realize the field is vastly diversified now. But is not chess a "tool" to verify AI? The "program" need not be specific to chess; HAL from the movie 2001 comes to mind, if you're familiar.

MDS -- Frankly, I wouldn't ordinarily waste my time with quacks like Sqod, but you yourself seem like a curious layman (no offence). You might think I'm being a dick to him, but from my perspective he's being a dick to you by taking you for a fool. I find it especially offensive that he throws in imposing, 'clever'-sounding technical terms in order to seem knowledgeable, but he misuses them and betrays his lack of knowledge. It's glaringly obvious to me, though obviously not to you if you're unfamiliar with the subject and just asking a general question.
This gimmick is common in many other fields: from homoeopaths pretending they know something about medicine, chemistry and quantum mechanics, to astrologers pretending they know something about astronomy, and mystics pretending they know something about cosmology. The esteemed reputation that the sciences have in the public imagination means that there is no end to the number and variety of bullshitters who will try to impress you by pretending to be privy to some special insight which you simply aren't clever enough to appreciate.
In contrast, I'm pointing you to a commonly used university undergraduate text for laying down the foundations of artificial intelligence. I'd be happy to lend you my own dog-eared copy if you were my neighbour (and I could find it). I wish it were cheaper on Amazon, but if you can afford it then I'd recommend you get it so you have something to refer to when telling the bullshitters apart from the real deal.
But even without it there's an easy way to catch out most bullshitters: they'll present themselves as having special insight which "the establishment" is too closed-minded to recognise and you aren't clever enough to appreciate, propped up by a few quotes from 'authoritative' sources which a pleb like you is in no position to challenge. However politely put, that's all Sqod has said, and I'm pretty sure you realise that's not how someone who actually has knowledge in an area, and is sincere about sharing it, would conduct themselves.
But all that aside, would you like a quick run-down of what cellular automata are, since Sqod wants to baffle you with the term? They're far, far simpler than you'd imagine from the fancy sciency name. The basic concept would probably pass as a children's board game.
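Actually, here's the whole idea right now, in a few lines of Python (a throwaway sketch of mine, not anyone's library code). A cellular automaton of the simplest kind is just a row of cells, each 0 or 1, where each cell's next value depends only on itself and its two neighbours. The "rule" is nothing but a lookup table giving the outcome for each of the eight possible neighbourhoods; numbering those tables 0-255 gives the so-called elementary rules:

RULE = 110  # any number 0-255 names one elementary rule

def step(cells):
    # Each cell looks at (left, self, right) and reads its next value
    # out of the rule number's binary digits.
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # a number 0-7
        new.append((RULE >> neighbourhood) & 1)
    return new

cells = [0] * 31
cells[15] = 1  # start with a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

Run it and you get a little triangle of patterned cells growing down the screen. That really is all there is to the basic concept; everything else is just which rule you pick, what you start from, and what patterns come out.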
But is not chess a "tool" to verify AI?
I don't think so. I'd say chess can only verify chess proficiency and little else. It's a simple logical framework, after all. You don't need "thinking" machines for that.
There aren't clear boundaries to general AI, and intelligence is judged subjectively more than anything else, so "verifying AI" is a rather vague concept.
And yes, I'm familiar with Kubrick's 2001. Love it. My assumption is that the scene where HAL is playing chess with the human crew is a visionary flourish from the filmmaker -- more symbolic than realistic in nature.

Shahh... thanks for the help. Wish I were more clever and appreciative of the facts. One problem: I knew a physics professor at Stanford University with an IQ off the charts (verifiable). He left academia due to the perceived dogma of the establishment within the scientific community. He taught me to question most things, as they may not be as they appear. "What Remains to Be Discovered" by John Maddox makes good reading.

Interesting idea. Sometimes when my GPS guides me through shortcuts it seems like an "original" and "creative" insight. But it is just using the large resources at its disposal.

MDS -- That would be a comparable situation if said physicist thought electrons were made of jelly. Clearly it's the dogmatic establishment which is incapable of accepting such an idea?! Actually, no. He can decide to use the word 'electron', but anyone who cares enough about physics to acquaint themselves with the basics can recognise that whatever it is he's referring to (which may or may not in fact be made of jelly) is definitely not an electron.
The common phrase relevant to your dilemma is "You want to be open-minded. But not so open-minded that your brains fall out."

Well... last I checked, my brains are still partially intact. I have "sometimers" and tend to forget things. Your description of the current status of AI is very enlightening, informative stuff. Please keep the four-letter expletives to yourself. It appears you are angry about disagreement with your expertise.

MDS -- Disagreements are actually great. They're the bread and butter of scientific progress. But what annoys me is pseudoscience, where someone borrows scientific terms to fraudulently make a non-idea seem sophisticated.
As far as the term "bullshit" goes, it's actually in common usage among physicists and other groups of scientists to refer to that and several other varieties of fraudulent behaviour. If it interests you then philosopher H. G. Frankfurt provides some clarification in his essay "On Bullshit" and his follow-up "On Truth". Both have been published in short book form and are very well written.

Interest in this thread seems to have ended. I want to thank the experts that showed up and shared their insights for us laymen.

Interest hasn't ended: your thread was merely overrun by trolls and jerks. Highly educated professionals aren't going to stick around a thread or even site like that to be insulted and to read vulgarities from some ignorant fool. I've had that happen before on this site when I posted a thread to try to get into some very heavy technical topics: the moderators finally deleted the entire thread at my request. This site simply isn't cut out for that type of discussion. I've had it happen on other forums, too, that were open to the general public and its immature adolescents.
A man convinced against his will is of the same opinion still.
----------
(p. 58)
Repetition and nesting are widespread themes in many cellular
automata. But as we saw in the previous chapter, it is also possible for
cellular automata to produce patterns that seem in many respects
random. And out of the 256 rules discussed here, it turns out that 10
yield such apparent randomness. There are three basic forms, as
illustrated on the facing page.
(p. 90)
But about once every 10,000 randomly selected rules, rather
different behavior is obtained. Indeed, as the picture on the following
page demonstrates, patterns can be produced that seem in many
respects random, much like patterns we have seen in cellular
automata and other systems.
So this leads to the rather remarkable conclusion that just by
using the simple operations available even in a very basic text editor, it
is still ultimately possible to produce behavior of great complexity.
(p. 627)
In the past it was often thought that logic might be an appropriate
idealization for all of human thinking. And largely as a result of this,
practical computer systems have always treated logic as something
quite fundamental. But it is my strong suspicion that in fact logic is
very far from fundamental, particularly in human thinking.
(p. 628)
But from the discoveries in this book we now know that highly
complex behavior can in fact arise even from very simple basic rules.
And from this it immediately becomes conceivable that there could in
reality be quite simple mechanisms that underlie human thinking.
Certainly there are many complicated details to the construction
of the brain, and no doubt there are specific aspects of human thinking
that depend on some of these details. But I strongly suspect that there is
a definite core to the phenomenon of human thinking that is largely
independent of such details--and that will in the end turn out to be
based on rules that are rather simple.
(p. 629)
But a crucial point is that on their own such processes will most
likely not be sufficient to create a system that one would readily
recognize as exhibiting human-like thinking. For in order to be able to
relate in a meaningful way to actual humans, the system would almost
certainly have to have built up a human-like base of experience.
(p. 630)
No doubt as a practical matter this could to some extent be done
just by large-scale recording of experiences of actual humans. But it seems
not unlikely that to get a sufficiently accurate experience base, the system
would itself have to interact with the world in very much the same way
as an actual human--and so would have to have elements that emulate
many elaborate details of human biological and other structure.
(p. 675)
Fairly straightforward modifications to the universal cellular
automaton shown earlier in this chapter allow one to reduce the number
(p. 676)
of colors from 19 to 17. And in fact in the early 1970s, it was already
known that cellular automata with 18 colors and nearest-neighbor rules
could be universal. In the late 1980s--with some ingenuity--examples of
universal cellular automata with 7 colors were also constructed.
But such rules still involve 343 distinct cases and are by almost
any measure very complicated. And certainly rules this complicated
could not reasonably be expected to be common in the types of systems
that we typically see in nature. Yet from my experiments on cellular
automata in the early 1980s I became convinced that very much simpler
rules should also show universality. And by the mid-1980s I began to
suspect that even among the very simplest possible rules--with just two
colors and nearest neighbors--there might be examples of universality.
The leading candidate was what I called rule 110--a cellular
automaton that we have in fact discussed several times before in this
book. Like any of the 256 so-called elementary rules, rule 110 can be
specified as below by giving the outcome for each of the eight possible
combinations of colors of a cell and its nearest neighbors.
(p. 715)
The key unifying idea that has allowed me to formulate the
Principle of Computational Equivalence is a simple but immensely
powerful one: that all processes, whether they are produced by human
effort or occur spontaneously in nature, can be viewed as computations.
(p. 742)
So when computational irreducibility is present it is inevitable
that the usual methods of traditional science will not work.
And indeed I suspect the only reason that their failure has not been
more obvious in the past is that theoretical science has typically tended
to define its domain specifically in order to avoid phenomena that do
not happen to be simple enough to be computationally reducible.
(p. 748)
So what does this mean for science?
In the past it has normally been assumed that there is no
ultimate limit on what science can be expected to do. And certainly the
progress of science in recent centuries has been so impressive that it has
become common to think that eventually it should yield an easy
theory--perhaps a mathematical formula--for almost anything.
But the discovery of computational irreducibility now implies
that this can fundamentally never happen, and that in fact there can be
no easy theory for almost any behavior that seems to us complex.
(p. 782)
In the early 1900s it was widely believed that this would
effectively be the case in all reasonable mathematical axiom systems.
For at the time there seemed to be no limit to the power of
mathematics, and no end to the theorems that could be proved.
But this all changed in 1931 when Gödel's Theorem showed that
at least in any finitely-specified axiom system containing standard
arithmetic there must inevitably be statements that cannot be proved
either true or false using the rules of the axiom system.
This was a great shock to existing thinking about the foundations
of mathematics. And indeed to this day Gödel's Theorem has continued
to be widely regarded as a surprising and rather mysterious result.
(p. 791)
But from the discoveries in this book it now seems quite certain
that vastly simpler examples also exist. And it is my strong suspicion
that in fact of all the current unsolved problems seriously studied in
number theory a fair fraction will in the end turn out to be questions
that cannot ever be answered using the normal axioms of mathematics.
Wolfram, Stephen. 2002. A New Kind of Science. Champaign, IL: Wolfram Media, Inc.

Interest hasn't ended: your thread was merely overrun by trolls and jerks. Highly educated professionals aren't going to stick around a thread or even site like that to be insulted and to read vulgarities from some ignorant fool. I've had that happen before on this site when I posted a thread to try to get into some very heavy technical topics: the moderators finally deleted the entire thread at my request. This site simply isn't cut out for that type of discussion. I've had it happen on other forums, too, that were open to the general public and its immature adolescents.
MDS -- Isn't that interesting? His claims have to be vague because they're proprietary, yet he wants to get into "very heavy technical topics". Which is it? A discussion for "highly educated professionals", except that everyone here, and everyone he's previously pitched to, who has been consistently amused by his comments, is "the establishment"/"trolls"/whatever. I'm sure.
He's been asking people to delete threads where he's been criticised? That's a very big red flag. Serious professionals just 'win the argument' by substantiating their claims. I recommend you keep your own record of all the comments here in case he gets trigger-happy with deleting or editing some of his claims if/when more actual experts wade in. You'll find he keeps changing his story as more weaknesses are pointed out.
The only comments I've seen which directly address your OP with relevant, useful and interesting resources are coming from POWERJOHN (my own have been more about Sqod's comments than about your questions). In particular, I think POWERJOHN's suggestion that you take a closer look at HIARCS makes sense; it would probably help you clarify your own questions. I'd also recommend you send POWERJOHN a PM and ask him whether he thinks there's anything in Sqod's claims. If you're lucky, he may be willing to go into some careful detail.
In the meanwhile, a note for appraising the credibility of the claims being made, and in particular Sqod's latest appeal to authority: Stephen Wolfram's "A New Kind of Science". That book is so confusingly written that it's been responsible for bewildering quite a few laymen besides Sqod. The main text can be interesting but is of limited value. But check out the End Notes: there you get an extremely valuable, organised and detailed resource. Unfortunately, many entries are mathematically heavy, so unless you already have that background, or are willing to learn the mathematical techniques being employed, they're probably going to be of limited value to you.
Anyway, good luck making headway with your questions. You're welcome to contact me if you want but I do think POWERJOHN is more your man for clarification.

Real food for thought by Sqod... can the normal axioms of mathematics answer future questions with absolute certainty? Physicists grounded in the established community (Shahh) say yes. Pardon me if I read this incorrectly. I particularly like Sqod's remark about "a strong suspicion that in fact logic is far from fundamental, particularly in human thought".

MDS -- Actually, Sqod was just citing a personal opinion of Wolfram's, where there are no tangible findings to refer to. That's actually one of the worst things about "A New Kind of Science": Wolfram freely mixes vague opinions with comments about very tangible findings. It's one of the things which makes it a minefield of confusion for lay readers like Sqod.
In that particular quote, Wolfram is either dead wrong or very vaguely right (depending on how you read it). Here's a breakdown:
"In the past it was often thought that logic might be an appropriate idealization for all of human thinking." -- Technically true from prior to AI as a serious topic and for the early history of AI. After the limitations of the earliest AI systems became apparent, approaches inspired by biological systems came to the fore again (neural nets and cellular automata being two good examples of old approaches which do that.)
"And largely as a result of this, practical computer systems have always treated logic as something quite fundamental." -- Actually quite false. Computer development has treated logic as fundamental because they where developed to solve problems which humans are bad at (logic and mathematics). Efforts toward AI started out using what they already had to work with and then took mixed approaches as the limitations became apparent.
"But it is my strong suspicion that in fact logic is very far from fundamental, particularly in human thinking." -- Apart from being a personal opinion vaguely posed, I don't think any neuroscientists would take issue with the last part. The human brain is notoriously bad at reasoning logically and as such logic could not be fundamental to human thinking. Wolfram basically wrote something very vague and then concluded something well known. It's a weird sentence.
"and better algorithms are simply the wrong approach, I believe."
-- your belief stands in conflict with actual progress with AI.
No, it doesn't. You need to make the distinction between GOFAI (good old-fashioned AI) and AGI (artificial general intelligence), which is where a lot of laymen get confused. GOFAI has been making progress for decades, but it is very slow progress, and still nothing like true intelligence, not even matching that of a reptile, much less a human. Very few people believe AGI has been created yet--if it had been, we'd probably be racing toward The Singularity right now, and newspapers would be headlining astronomical progress in every field every day--so you're trying to compare practical applications of digital computers with a technology that doesn't even exist yet, and that is likely to require some machine that doesn't even qualify as a computer, like an artificial neural network.
"because it would suggest that something fundamentally different is being done to process the incoming information in an intelligent manner."
-- You mean like a different algorithm? Sure. But not a mystery to the programmers who developed the algorithm.
No, intelligence is not algorithmic, which is why I said that better algorithms simply won't work. I'm referring to a completely different computing paradigm. Some examples of alternative computing paradigms are artificial neural networks and cellular automata. There exist many more.
Here's a different question which might help you think this through: are cars 'good' cars?
i) If you think they are, then when did cars become good? Can you think of a turning point from bad cars to good cars? Was it obvious before the turning point? Did people just prior to that see a previous advance as THE turning point?
ii) If you don't think cars are good, then maybe you'll want to set the turning point at flying cars? If we're playing that game then I'd prefer to set the turning point at warp-drive-enabled, time-travelling cars with laser cannons (ideal for the school run). Just sayin'.
Per that kind of analogy, you can ask if digital computers are "good" computers. I claim that digital computers are simply the wrong tool for intelligence, just as biological computers (i.e., brains) are simply the wrong tool for high-speed, accurate mathematical computation and high-volume storage of verbatim data. Or you can ask if cars are "good" transportation. Same thing: cars are great on paved roads for certain speed ranges, but are the wrong tool for extremely rough terrain, where walking machines would excel, or for extremely high-speed travel, where airplanes would excel, or for traveling underwater, where submarines would excel. No one tool can do all tasks efficiently--such a universal tool (e.g., a car) would be a bulky monstrosity, impractically large and expensive. The type of task or goal for which a tool is designed is critically important to consider, and in fact it is part of my own definition of intelligence.
----------
(p. 183)
One circumstantial argument in favor
of this conclusion that I personally find appealing is that right from the
word go, the field of AI attracted some of the smartest thinkers around.
When so many very bright minds, provided with enormous resources,
failed to achieve their goal, it makes sense to look for a reason. And the
most obvious explanation in this case is that the goal is an impossible
one.
Pascal put the point quite clearly--and bluntly--in his collection
Pensées, written in 1670:
These principles [involved in reasoning] are so fine and so numerous that a
very delicate and very clear sense is needed to perceive them, and to judge
rightly and justly when they are perceived, without for the most part being
able to demonstrate them in order as in mathematics. . . . Mathematicians
wish to treat matters of perception mathematically, and make themselves
ridiculous . . . the mind . . . does it tacitly, naturally, and without technical
rules.
Pascal's words might seem unnecessarily cruel when quoted in
reference to twentieth-century work in AI, but remember that Pascal was
himself a mathematician. He was not decrying mathematics or
mathematicians. He was simply pointing out that, although mathematical
thinking has many uses, it cannot be applied to everything, and the
functioning of the human mind is one of the things to which it cannot
be applied.
Devlin, Keith. 1997. Goodbye, Descartes: The End of Logic and the Search for a New Cosmology of the Mind. New York: John Wiley & Sons, Inc.