tygxc won't reply to me. He only replies to his intellectual equals, such as Elroch and MAR.
Not much of what you post is worth a reply.
Oh and where did I just tell someone I didn't understand something? I was telling Elroch that he doesn't understand something. ...
Indeed you were. That was where you told everyone you didn't understand what he'd posted.
Show me you're not a troll.
If you understood what a troll was, you might stop trolling.
@5170
"This method of solving chess relies on using the judgement of GMs or engines"
++ No, it does not rely on the judgement of engines, it relies on the ability of the engines to calculate until the 7-man endgame tablebase or a prior 3-fold repetition.
No, it does not rely on the judgement of GMs. The GMs reduce the computation to relevant width and depth. The proof of the Four Color Theorem did not involve coloring all maps, only a humanly determined relevant subset.
"any engines used would have been surpassed by new developments"
++ The newer engines can complete the same task faster.
A newer Stockfish released during the 5 years of the task can be switched to.
An 8-man tablebase, once released, can be used, but it does not change much.
"casting doubt on the entire process." ++ No. Computers are now more powerful than in 1976. That casts no doubt on the proof of the Four Color Theorem. Newer computers cast no doubt on the solutions of Losing Chess, Checkers, Connect Four, or Nine Men's Morris either.
"only a brute-force computation of all possibilities can be entirely reliable"
++ It is pointless to compute all possibilities of say 1 e4 e5 2 Ba6? until checkmate.
We know the outcome for sure: white loses. What would be the point of this computation?
It is pointless to compute all possibilities of the final position of this game https://www.iccf.com/game?id=1164259 until a 3-fold repetition.
We know the outcome for sure: a draw. What would be the point of this computation?
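For readers unfamiliar with what tygxc is describing, here is a minimal sketch (Python, illustrative pseudocode, not his actual procedure) of a search that stops as soon as a tablebase probe or a prior repetition settles a line. The helpers probe_7man_tablebase, is_threefold, candidate_moves and the position interface are hypothetical placeholders; the disputed step is precisely that candidate_moves is narrowed by GM/engine judgement rather than taking every legal move.

# Hedged sketch only: hypothetical helpers, not runnable against a real engine.
DRAW = 0

def black_holds_draw(position, history):
    """True if Black can answer every White try with a line reaching
    a drawn tablebase position or a prior 3-fold repetition."""
    value = probe_7man_tablebase(position)        # assumed 7-man probe
    if value is not None:
        return value == DRAW
    if is_threefold(position, history):           # prior 3-fold repetition
        return True
    moves = candidate_moves(position)             # width reduced by GM/engine judgement
    if position.white_to_move:
        # every White candidate must be met
        return all(black_holds_draw(position.play(m), history + [position]) for m in moves)
    # Black needs only one adequate reply
    return any(black_holds_draw(position.play(m), history + [position]) for m in moves)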
tygxc won't reply to me. He only replies to his intellectual equals, such as Elroch and MAR.
Not much of what you post is worth a reply.
What you mean is that you don't understand much because you're lazy and other things which we needn't go into. If you had a bit of intelligence and could actually use it, your reaction would be different. Your reaction being what it is makes a statement about you, not about anything else.
Anyone who talks so much about their own intelligence level must be very insecure.
I think you must be very insecure to go round looking for people so you can tell them they're insecure.
Let's just check this. Has Yoyostrng suggested anyone else was insecure?
@5170
"This method of solving chess relies on using the judgement of GMs or engines"
++ No, it does not rely on the judgement of engines, it relies on the ability of the engines to calculate until the 7-man endgame tablebase or a prior 3-fold repetition.
No, it does not rely on the judgement of GMs. The GMs reduce the computation to relevant width and depth. The proof of the Four Color Theorem did not involve coloring all maps, only a humanly determined relevant subset.
This is a misleading statement. The Four Colour Theorem involved proving that there was a way to colour ANY map with four colours. Not just a humanly determined relevant subset.
The proof fell into two parts. The first part was that if there was a map that could not be four coloured, then either there was a smaller map that could not be four coloured or the map was in a specific finite set of maps (the original list had 1,834 and a later version had 1,482). The second part of the proof was to show that every one of the maps in the finite set was four colourable. This was the part involving heavy computation. Completing it finished a reductio ad absurdum proof, which can be converted into a purely deductive proof.
The human agency was only in defining the structure of the proof. All of the steps were mechanisable deductive steps with no shortcuts.
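To make the shape of that argument concrete, here is a tiny illustrative sketch (Python). unavoidable_set and is_reducible are placeholders standing in for the real Appel-Haken configurations and reducibility checks, which are not reproduced here.

def no_minimal_counterexample(unavoidable_set, is_reducible):
    # Part 1 (human-structured, proved separately): any minimal counterexample
    # to four-colourability must contain some configuration from unavoidable_set.
    # Part 2 (the heavy computation): each such configuration is reducible,
    # i.e. a counterexample containing it could be shrunk, contradicting minimality.
    # If every check passes, no minimal counterexample, hence no counterexample, exists.
    return all(is_reducible(config) for config in unavoidable_set)

# Toy usage with made-up stand-ins (the real set had 1,482-1,834 configurations):
print(no_minimal_counterexample(["config_A", "config_B"], lambda c: True))  # True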
"any engines used would have been surpassed by new developments"
++ The newer engines can complete the same task faster.
A newer Stockfish released during the 5 years of the task can be switched to.
An 8-man tablebase, once released, can be used, but it does not change much.
"casting doubt on the entire process." ++ No. Present computers are more powerful than those in 1976. That casts no doubt on the proof of the Four Color Theorem. Newer computers cast no doubt on the solutions of Losing Chess, Checkers, Connect Four, or Nine Men's Morris either.
"only a brute-force computation of all possibilities can be entirely reliable"
++ It is pointless to compute all possibilities of say 1 e4 e5 2 Ba6? until checkmate.
We know the outcome for sure: white loses. What would be the point of this computation?
You need to at least acknowledge that you think you know the outcome that white loses and that those with better understanding point out that your certainty is pragmatically reasonable as a chess player playing the odds, but inadequate for a proof.
LeelaZero's billion parameters for evaluating everything about a position (trivially including the material and anything you might include in "all other factors", plus a million times more) provide it with enormously more testable understanding about this, but do not provide it with certainty. A passable human player like yourself being certain about this is an example of your poorer judgement versus an AI that is over 1000 points stronger.
Just look at this nonsense. I've already explained to him that non-certainty is built into a machine like Leela. It can't do otherwise. Doesn't take a blind bit of notice.
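To illustrate the point about built-in non-certainty (a deliberately minimal stand-in, not Leela's real code or numbers): a win/draw/loss head ends in a softmax, and a softmax over finite inputs can only ever output probabilities strictly between 0 and 1.

import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for a position the network considers clearly winning:
win, draw, loss = softmax([8.0, 1.5, -6.0])
print(win, draw, loss)   # roughly 0.9985, 0.0015, 0.0000008 - confident, never exactly 1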
Likewise Bayesian reasoning is provably the only fully consistent way of quantifying belief by reasoning from the specific to the general (inductive reasoning), and no amount of evidence can ever reduce a finite amount of uncertainty to zero uncertainty by Bayesian inference.
This is a (meta)fact about knowledge about the real world (such as all science) and also applies to questions that are in principle possible to decide by exhaustive analysis but presently impractical to do so. (I hope it is obvious that solving chess falls into that category).
The reason that AIs like Leela are designed to quantify uncertainty and not to ignore it is that that is appropriate.
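A small numerical illustration of that last claim (made-up prior and likelihood ratio, exact arithmetic via fractions): repeated Bayesian updates drive belief towards certainty but never reach it.

from fractions import Fraction

p = Fraction(1, 2)               # prior degree of belief in hypothesis H
likelihood_ratio = Fraction(10)  # each observation favours H ten to one

for _ in range(20):              # twenty pieces of confirming evidence
    odds = (p / (1 - p)) * likelihood_ratio   # Bayes' rule in odds form
    p = odds / (1 + odds)

print(p)       # 100000000000000000000/100000000000000000001
print(p < 1)   # True: arbitrarily close to certainty, never equal to it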
@5193
"you know the outcome that white loses" ++ Yes 1 e4 e5 2 Ba6? loses for sure.
"pragmatically reasonable as a chess player playing the odds, but inadequate for a proof"
++ On the contrary: I would not bet on some chess players winning it as black against Stockfish.
For the proof it is adequate as it is sure knowledge white loses it with best play from both sides.
The use of knowledge is allowed and beneficial.
A solution of chess is no better with a full tree of 1 e4 e5 2 Ba6? to checkmate than without.
Elroch
Really? Let's take a closer look at this so-called "provably the only fully consistent way of quantifying belief by reasoning from the specific to the general (inductive reasoning)." First, let's unpack what is meant by "belief." Belief, according to the Merriam-Webster Dictionary, is "an acceptance that something exists or is true, especially one without proof." So, in order to believe something, we don't necessarily need proof; we just need to accept that it exists or is true.

Now, let's look at "reasoning from the specific to the general (inductive reasoning)." Inductive reasoning is defined as "inference in which the conclusion about a whole is drawn from facts about some of its parts." In other words, it is the process of making a generalization based on specific observations. So, what Bayesian reasoning is saying is that we can make a generalization about something (i.e., believe something) without proof, simply by observing some of its parts.

This may sound reasonable at first glance, but upon further examination, it is clear that this is not a sound way to form beliefs. For one thing, it is based on the fallacy of induction, which is the mistaken belief that because something has always been true in the past, it will always be true in the future. This is clearly not the case; just because something has always been true in the past does not mean it will always be true in the future.

Additionally, Bayesian reasoning relies heavily on probability, which is inherently uncertain. As noted by statistician George Box, "all models are wrong, but some are useful." In other words, no model is perfect, and all models contain some degree of uncertainty. Therefore, to say that Bayesian reasoning is "provably the only fully consistent way of quantifying belief by reasoning from the specific to the general" is simply not true. It is based on fallacious reasoning and uncertain probabilities, and is therefore not a sound way to form beliefs.
@5197
"Belief, according to the Merriam-Webster Dictionary, is
"an acceptance that something exists or is true, especially one without proof."
++ And also
"Proof is the evidence that compels acceptance by the mind of a truth or fact"
"Evidence is matter submitted in court to determine the truth of alleged facts"
Only inductive knowledge (e.g. all scientific knowledge) is "highly confirmed belief".
Deductive knowledge results from the application of logical deduction to axioms.
For example, the fact that there is not a finite number of prime numbers is a proven theorem, not a "highly confirmed belief".
[To be pedantic, you do need to believe in the consistency of a formal system to trust it. Consistency means that there is no proposition in the system that is provably true and provably false. Consistency is never decidable (hence never a matter of known fact) for any system powerful enough to represent the natural numbers. But few believe Peano's axioms are inconsistent. This is based both on intuitive basis - Peano's axioms seem valid - and the lack of any inconsistency in all of mathematics built on them (an example of "highly confirmed belief")].
For finite systems, consistency is (in principle) checkable by exhaustive elimination. All questions about chess can be expressed within such a system.
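As a throwaway illustration of "checkable in principle by exhaustive elimination" (a toy game, nothing to do with chess except in kind): the value of one-heap Nim, where a move takes 1 or 2 stones and the last player to move wins, computed by checking every continuation.

from functools import lru_cache

@lru_cache(maxsize=None)
def to_move_wins(stones):
    if stones == 0:
        return False            # no legal move: the player to move has lost
    # the player to move wins iff some move leaves the opponent in a losing state
    return any(not to_move_wins(stones - take) for take in (1, 2) if take <= stones)

print([to_move_wins(n) for n in range(7)])
# [False, True, True, False, True, True, False] - losses exactly at multiples of 3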
No. You don't need to believe in the consistency of a formal system to trust it. You only need to believe that it is consistent if you want to use it to prove things. Consistency is not required for trust, only for proof. Formal systems are useful because they allow us to explore what is true and what is false in a controlled way. We can use them to test our hypotheses and see what follows logically from what we assume. But we don't need to believe that they are consistent in order to do this. We can still use them to explore, even if we think there is a chance they might be inconsistent. In fact, it is often useful to explore inconsistent formal systems, as they can help us to understand what goes wrong when a system is inconsistent. They can also help us to develop new ways of thinking about problems.
You do need to believe in the consistency of a formal system to trust it. Consistency means that there is no proposition in the system that is provably true and provably false. Consistency is never decidable (hence never a matter of known fact) for any system powerful enough to represent the natural numbers. But few believe Peano's axioms are inconsistent.
For finite systems, consistency is (in principle) checkable by exhaustive elimination. All questions about chess can be expressed within such a system.
No. You don't need to believe in the consistency of a formal system to trust it. You only need to believe that it is consistent if you want to use it to prove things.
You need to believe in the consistency of a formal system to trust it to prove things. This is all a formal system does.
DEFINITION:
"A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules."
You're certainly on the right track, but are blatantly confusing the trees for the forest. A formal system is not just some opaque structure, but one that we create and understand. The rules of a formal system are not some ethereal set of guidelines that we have no control over, but something that we design deliberately to achieve specific goals. The main point you seem to be missing is that a formal system is a tool, and like any tool, its efficacy is determined by how it is used. A hammer is a great tool for driving nails, but a terrible one for painting a picture. In the same way, a formal system is a great tool for proving things, but only if it is used correctly. Just because a formal system can be used to prove things, that doesn't mean it will always get the correct answer. If the rules of the system are not followed correctly, or if the axioms are not chosen wisely, then the system will produce false results. Therefore, it is not the formal system itself that we need to trust, but the people using it. We need to trust that they understand the system and are using it correctly. Only then can we trust the results it produces.
Only inductive knowledge (e.g. all scientific knowledge) is "highly confirmed belief".
Deductive knowledge results from the application of logical deduction to axioms.
For example, the fact that there is not a finite number of prime numbers is a proven theorem, not a "highly confirmed belief".
[To be pedantic, you do need to believe in the consistency of a formal system to trust it. Consistency means that there is no proposition in the system that is provably true and provably false. Consistency is never decidable (hence never a matter of known fact) for any system powerful enough to represent the natural numbers. But few believe Peano's axioms are inconsistent. This is based both on intuitive basis - Peano's axioms seem valid - and the lack of any inconsistency in all of mathematics built on them (an example of "highly confirmed belief")].
For finite systems, consistency is (in principle) checkable by exhaustive elimination. All questions about chess can be expressed within such a system.
We're getting to where I think you're making the signature mistake, as it were. You're separating induction from deduction and treating them as different entities, in an absolute sense. Your entire argument rests on that.
Yeah, I am clearly making the old mistake of separating chalk from cheese rather than enjoying the crunchiness in my sandwich and not worrying about the blackboard smearing and smelling a bit.
To be serious, deduction and induction are entirely distinct, and anyone who doesn't understand this needs to learn about it rather than broadcasting their ignorance.
Look, if you knew what not being a stupid child was like, you might give that a try. I told you before. If you carry this on, you'll be reported.
See, that was trolling. Please do report, it will eventually bear fruit, but only by drawing attention to how often you post things like "stupid child", so you might not like the result.
LeelaZero's billion parameters for evaluating everything about a position (trivially including the material and anything you might include in "all other factors", plus a million times more) provide it with enormously more testable understanding about this, but do not provide it with certainty. A passable human player like yourself being certain about this is an example of your poorer judgement versus an AI that is over 1000 points stronger.
Just look at this nonsense. I've already explained to him that non-certainty is built into a machine like Leela. It can't do otherwise. Doesn't take a blind bit of notice.
Half your posts are telling people how intelligent you are and half are telling people you can't understand things. Doesn't seem to be any consistency.