It's wrong 100% of the time.
It has no concept of truth or falsity; it just throws associations into an output collection. Its associative discrimination is equivalent to (and constrained to) the query/prompt, and nothing it produces has value without informed human judgment.
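A minimal sketch of that point in Python (the tokens and probabilities here are invented for illustration): next-token sampling weighs continuations by association strength alone, and nothing in the loop consults any notion of truth.

```python
import random

# Hypothetical association scores, conditioned on some prompt.
# A correct answer and a wrong one differ only in weight, not in kind.
next_token_probs = {"Paris": 0.90, "Lyon": 0.06, "Berlin": 0.04}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a continuation weighted purely by association strength."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "Paris", occasionally "Berlin"
```

There is no branch anywhere that asks "is this true?"; a human has to supply that judgment after the fact.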
LLMs are not good chess players.
But they are even worse as chess teachers. They are likely the worst teachers conceivable.
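You can check the chess claim yourself. Assuming you have the python-chess package and some LLM client (`ask_model_for_move` below is a hypothetical placeholder, not a real API), let the chess library, not the model, decide whether a suggested move is even legal:

```python
import chess

def ask_model_for_move(fen: str) -> str:
    """Hypothetical LLM call: returns a move in SAN for the given position."""
    raise NotImplementedError("plug in your LLM client here")

def is_legal_suggestion(fen: str, san_move: str) -> bool:
    """Return True if the suggested SAN move is legal in the position."""
    board = chess.Board(fen)
    try:
        board.parse_san(san_move)  # raises a ValueError subclass on illegal/ambiguous SAN
        return True
    except ValueError:
        return False
```

Run that over a few dozen positions and count the failures; the legality check lives entirely outside the model.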

If you succeed, let me know how you did it.
The deadline was over a year ago.
I could say the guy failed miserably, but that would be the understatement of the century. He gave up well before the deadline, after a month of doing nothing.