#1853
No:
"Nodes per second (NPS) is the main measure of chess engines’ speed. It indicates the number of positions the engine considers in one second."
Please see the informative reference link.
It even explains the tradeoff between a simple evaluation and deep search (superior) and a sophisticated evaluation and shallow search (inferior).
Chess will never be solved, here's why

FLOPS (that is, Floating Point Operations Per Second) is the better measure, but any "financier" worth his salt would demand both numbers with supporting documentation. Number of chess positions evaluated per second is highly variable, and any snake oil salesman could and would present the most "simple" set of nodes as representative of the whole, while also pushing a limited and ultimately threadbare "evaluation" function...between these 2 fudges, you could knock of an order of magnitude. Which is miniscule in terms of solving chess, but enormous in terms of cost/benefit analysis.
Today, the fastest Supercomputer is 442 PetaFLOPS. It costs an estimated $1 billion dollars. Traversing the tree of chess positions and picking the next one, evaluating the position fully, and classifying/storing the result is *at best* in the 10s of operations, but much more likely in the low hundreds of operations. If you assume that fully evaluating a chess position takes 100 operations, then:
31,536,000 seconds in a year
442 PetaFLOPS = 442000000000000000 operations/second
442000000000000000 / 31,536,000 = 14015728056.8 operations/year
14015728056.8 operations/year / 100 operations per position evaluation = 140157280 positions/ year
10^44.5 positions to be evaluated (Tromp's number, best estimate currently available)
10^44.5 positions / 140157280 positions/year = 2.256235^36 years for the fastest supercomputer to weakly solve chess (almost guaranteed)
80 trillion = estimated worldwide wealth
$80 trillion / $1 billion = 80,000 supercomputers
2.8202937^31 years for 80,000 supercomputers to weakly solve chess (almost guaranteed)
Square root of 10^44.5 = 1.7782794^22
1.7782794^22 / 140157280 = 1.2687742^14 years for the fastest supercomputer to weakly solve chess (if the square root premise is proven to be sufficient, which is not currently the case)
1,585,967,750 years for 80,000 supercomputers to weakly solve chess (if the square root premise is proven to be sufficient, which is not currently the case, and if the entire human race is left to perish and die from lack of resources due the whole world's wealth being spent on the effort)
The *only* way Tygxc's premise works is to chop another 9 orders of magnitude off, *then* take the square root, *then* equate apples to oranges by implying that Stockfish/Sesse positions/second is equivalent to fully evaluating the position at a "tablebase solid" level of precision.
Now there's the 'real' post about this ...
1) "but any "financier" worth his salt would demand both numbers with supporting documentation. "
2) "Number of chess positions evaluated per second is highly variable,"
"highly variable"
See it? And this time its 'evaluated'.
Forum topic: 'solved'
earlier several times '10^9 nodes per second computer'
then grudgingly 'considered'
now we've got 'evaluated'
qualified by 'ultimately threadbare "evaluation" function'
that's got truth/accuracy/correctness/realism written all over it
its putting a very nasty interference into 'nodes per second'
a killer though - that puts a torpedo into 'nodes per second' - is 'variable'.
Another is - consider the first opening position - how many 'nodes' per second is that position being 'considered'? What is 'considered' anyway ?
Regarding taking the square root of the number - that's been discussed many times here. Its quite a cutdown.
The bigger the number you're looking at - the more it cuts down the number.
Which begins to discredit such a policy.
It implies that the more pieces you add to the position - the higher the ratio of irrelevant positions? But look at how much higher !
Try considering that for a second ... no pun intended
if there's a million positions being considered - then square root would mean 1000 positions. 999,000 positions knocked out ... by formula.
Only one thousandth of the positions left as 'the answer'.
But if we had a trillion positions and do the square root - a million ....
then we've only got a millionth of the positions left ?
Mathematically - yes.
But is that valid for 'solving' purposes. Even 'weak solving'?
Was @btickler 'generous' in saying a position could be 'fully evaluated' with 100 Ops ? (there might now be a lot of yelling from somebody else about 'floating point' Ops ...)
If a computer didn't have basic units of speed - if it didn't have discrete 'ops' its doing ... how would anybody ever build it? How would it even be designed? How could you sell the computer to anybody?
If its not doing ops then what is it doing?
Would you say "We don't know how fast or slow it is and we don't care. Try it yourself and find out."
#1853
No:
"Nodes per second (NPS) is the main measure of chess engines’ speed. It indicates the number of positions the engine considers in one second."...
Then the figure is never 10^9 for any engine / hardware combination.

There seems to also be a contradiction in place -
between 'generates' and 'considers'.
The term 'generates' would make Much more sense and fit much better with 'nodes per second' than 'considers'.
But 'generates' isn't making a single little Dent in 'solves' ...
not even a scratch on the paint. Not a ding.
Not even in 'weak solving'.
even though its much more valid-looking in this context - than 'considers'.
#1855
Tickler should know the difference between floating point operations, integer operations and boolean operations. Solving chess requires not 1 single floating point operation. Not 100 but 0.
Tickler's post is wrong on other counts as adhering to 10^44, while all randomly sampled positions found legal by Tromp contain multiple underpromotions to pieces not previously captured, which cannot occur in a reasonable game with reasonable moves and thus not in an ideal game with optimal moves.
"Regarding taking the square root of the number"
Weakly solving chess requires to consider less positions than strongly solving chess.
Losing Chess has been weakly solved considering 900,000,000 positions only.
#1856
Please see the reference article, that is what it states.
Make no mistake: those cloud engines of over 10^9 nodes / s are beasts.
They are 1000 x faster than a desktop, which typically reaches 10^6 nodes / s.
That is not excessive: those very fast cloud engines being 1000 times faster than a desktop.
#1855
Tickler should know the difference between floating point operations, integer operations and boolean operations. Solving chess requires not 1 single floating point operation. Not 100 but 0.
Are you saying that the operating system, with which all applications interact, or the language(s) or NNUE functions used in SF14, your main vehicle as I understand, don't use floating point instructions? Can you convincingly show that?
Just because you plan to code no floating point operations doesn't necessarily mean they won't happen.
The point about FLOPs is that computer manufacturers usually aim to balance speed between general, fixed point and floating point operations for usual applications of the type run in their prospective market. While computer speed is not a simple subject, FLOPs can usually be taken as a rough guide. No manufacturer quotes chess nodes per second.
Tickler's post is wrong on other counts as adhering to 10^44, while all randomly sampled positions found legal by Tromp contain multiple underpromotions to pieces not previously captured, which cannot occur in a reasonable game with reasonable moves and thus not in an ideal game with optimal moves.
@btickler is not wrong. You are wrong. Firstly, "reasonable" and occurring in a weak solution are not connected and, secondly, occurring in a weak solution and occurring in your proposed method of finding one are connected only in that the former is a miniscule subset of the latter.
"Regarding taking the square root of the number"
Weakly solving chess requires to consider less positions than strongly solving chess.
Losing Chess has been weakly solved considering 900,000,000 positions only.
Ergo, you can assume the square root???

More and more - the term 'nodes' might be proven to be entirely invalid in the context of the forum topic.
What is the speed of your computer in 'nodes' per second as they pertain to the single opening position of chess?
The speed is Zero. It cannot even do one node in a second or even in a year. On that position.
Suggestion: don't tell us about desktops.
You want boolean or integer.
Why don't you post at what speed the computer attacks any position in those operations per second ?
I don't think you're going to.
#1860
"The square root jump looks like a kind of 'cheating' to me."
Losing Chess has been solved considering only 900,000,000 positions that is not even the 4th root of 10^36.
"speeds of the computers in those units instead of jumping to 'nodes'?"
Because time to solve chess = number of positions to consider x time to consider 1 position

#1860
"The square root jump looks like a kind of 'cheating' to me."
Losing Chess has been solved considering only 900,000,000 positions that is not even the 4th root of 10^36.
"speeds of the computers in those units instead of jumping to 'nodes'?"
Because time to solve chess = number of positions to consider x time to consider 1 position
You aren't telling us the reason you refuse to post the speed of the computer in boolean or integer units.
That's becoming more obvious now.
And using a fourth root is even more blatant 'cheating' than using a square root.
#1861
My desktop considers around 10^6 nodes / s.
You can see the number on your desktop when you run an engine on it.
That a cloud engine is 1000 times faster should be no surprise.

Losing chess is an even poorer guide to difficulty than checkers because it has a winning strategy. I explained why this can be expected to provide a lower exponent.
Checkers is more directional than chess (it's like it only has pawns!) which is also likely to provide a lower exponent than chess.
So don't be confident of needing to evaluate fewer than (10^44)^(2/3) positions, say.

#1855
Tickler should know the difference between floating point operations, integer operations and boolean operations. Solving chess requires not 1 single floating point operation. Not 100 but 0.
Tickler's post is wrong on other counts as adhering to 10^44, while all randomly sampled positions found legal by Tromp contain multiple underpromotions to pieces not previously captured, which cannot occur in a reasonable game with reasonable moves and thus not in an ideal game with optimal moves.
@btickler is not wrong. You are wrong. Firstly, "reasonable" and occurring in a weak solution are not connected and, secondly, occurring in a weak solution and occurring in your proposed method of finding one are connected only in that the former is a miniscule subset of the latter.
"Regarding taking the square root of the number"
Weakly solving chess requires to consider less positions than strongly solving chess.
Losing Chess has been weakly solved considering 900,000,000 positions only.
Ergo you can assume the square root???
I don't think its necessary to even know that @MARattigan and @btickler are professional programmers with credentials and experience here -
because their posts are obviously more objective and informed and accurate and they don't seem to want to hide information.
Halving the length of the number of positions cuts that number down much much more than dividing it by 2 ...
and it looks like cheating to me.
Square root tends to do that. With a number like 9 you're only cutting it by two thirds by taking the square root.
But with a number like a trillion ... when you take a square root -
you're cutting it down to a millionth of what it was.
It may not look that bad - because the number of zeroes behind the '1' is still 6 instead of 12. But its 'bad' - real 'bad'.
Analogy: You're buying a new car.
Salesman: "The price of the car is $40,000."
Customer: "I want a discount. And I was born in the same town you were."
Salesman: "Okay - we'll charge you the square root of the price"
$200.
But whoever is pushing 'five years to weakly solve chess' while at the same time refusing to post how fast/slow the computers involved are in booleans/second or integers/second or FLOPS/second ...
maybe he doesn't know?
Why push if there's critical missing information?
Doesn't care? Then why push ?
Another piece of information missing - and this next one can't be blamed on anybody - how much computer time needed to solve just one position?
Mathematician: "Insufficient Data. What position are you talking about? There's a lot of variation. Putting it mildly."

You seem to be gradually arriving at what I was pointing out 1000 posts ago. Well done! Getting there fast!
Actually, this was an update to an argument I made in 2017. I just updated the FLOPS to 422 to reflect the fastest supercomputer of 2021. Glad you finally caught up, though .

I still think that the best way is games and not positions,
[snip]
You've just contradicted yourself, since @btickler surely disagrees with this.
Yep.
@tygxc
#1858 "Tickler's post is wrong on other counts as adhering to 10^44...found legal by Tromp..."
I have to concede @btickler didn't get it quite right in context.
The fact is, while you seem to be a little schizophrenic on the point, I think you are proposing to solve the competition rules game.
Tromp's bounds apply to the basic rules game.
In the competition rules game the relevant description of a position (and the one SF14 needs to function as designed) would be effectively a PGN starting with a FEN with a zero ply count (you can throw away venue, player names etc.).
To illustrate the difference, here I am playing myself in Arena and attempting to mate with a king and rook against a lone king.
but I'm not doing too well, because after five moves I find myself right back where I started.
But help is at hand because I have a copy of Wlhelm and the 3-5 man Nalimov tablebases, so I can ask them.
Nalimov says I should play Rh3, so I go back into Arena and do just that.
Dang! I could have sworn there was a win there somewhere.
Of course I'm looking at the wrong tablebase, because Nalimov is a tablebase for the basic rules game and Arena, as arbiter, has decided I'm going to play (a flawed version of) the competition rules game. But if I make my moves on the syzygy-tables.info site instead of in Wilhelm, in this particular endgame the Sysygy tablebase, which is designed to handle the competition rules game, will always give the same moves as Nalimov anyway.
The upshot is, for the competition rules game, the number of legal positions in this endgame is not the 49,389 assigned by Tromp in the basic rules endgame, but 49,389 x 100 x 3⁴⁹³⁸⁹ⁿ where the factor 100 accounts for the possible different ply counts and the last factor accounts for the number of times each diagram with the same material and side to play has already occurred (which can be 0, 1 or 2), n being a rather small fraction since many combinations will be impossible.
Positions with a repetition count of 2 will not occur in any weak solution and tablebase generation will avoid these. SF14 however, as you plan to use it, will happily walk into them and they therefore have to be (the major) part of your search space.
At any rate there may be more legal positions in this endgame alone in the competition rules game than there are in the whole of the basic rules game. I leave it to you to work out how many there are in the whole competition rules game.

#1855
Tickler should know the difference between floating point operations, integer operations and boolean operations. Solving chess requires not 1 single floating point operation. Not 100 but 0.
Tickler's post is wrong on other counts as adhering to 10^44, while all randomly sampled positions found legal by Tromp contain multiple underpromotions to pieces not previously captured, which cannot occur in a reasonable game with reasonable moves and thus not in an ideal game with optimal moves.
"Regarding taking the square root of the number"
Weakly solving chess requires to consider less positions than strongly solving chess.
Losing Chess has been weakly solved considering 900,000,000 positions only.
FLOPS is the standard measure of speed, regardless. Boolean and integer operations are not used much as a measure of processing speed, much in the same way that nobody cares what speed a car can achieve in 1st and 2nd gear.
Go ahead and dig up some conversion rate for boolean and/or integer operations if you like...I don't have any need of doing more homework. I'm betting the difference is an order of magnitude, a couple at best.
"Nodes" is eminently fudge-able. FLOPS is not.
P.S. You gave up the ghost when you later said positions are "considered, not solved", really.

@tygxc
#1858 "Tickler's post is wrong on other counts as adhering to 10^44...found legal by Tromp..."
I have to concede @btickler didn't get it quite right in context.
Tromp's number is currently the best one. Any number that discards "excess" (as determined by human players' notions of chess) promotions, or arbitrates "reasonable positions" based on human-derived valuations of chess is garbage. If anyone can prove illegality of more positions, go for it. Until then, 10^44.5 is the number. Before Tromp's study, it was 10^46.7.
#1787
No: read number of positions = time in nanoseconds
When the engine considers 10^9 positions in one second, then it takes 10-9 seconds = 1 nanosecond per position it considers
The engine generates 10^9 positions in one second without considering them. Considering needs a little longer.