You believe you have a choice in that then ?
Some choice maybe, total choice ..nah
Just like this alleged free will thingy
Well, first of all, I would be in disguise; secondly, I would have a good look on ScamAdviser.com and find one that might appeal to you.
You would hopefully not know it was me, or a scam, just like people who claim they have choice do not realise how other influences have driven them towards that choice, be that genetics, upbringing, life experiences, media... or whatever.
Yes. But you need to go the specialist division, called Action Fraud.
After a very long discussion with an AI due to my error of thinking, I have got revenge. Yesterday I asked Grok a simple question and initially got an answer that was incorrect by a factor of about 10^60, which may be a record. It did do a lot better with multiple hints that it had gone wrong.
Not the greatest example, but it is interesting that although my prompts were very simple, the combination of those and Grok's knowledge and analytical capabilities was needed to get to a good answer. A good example of human-AI collaboration? Still, would be better if it got it right straight off.
You can have the basic "model" for free, but it offers you the chance to "go super" for $30. Just wondering if anyone has paid that and what you then get in addition to "normal"?
I of course want to get inside its "head", not seeming to be able to do that on "normal".
As you know, you can switch on the "thinking" option and you will see lengthy brainstorming before it outputs an answer designed for you. The thinking is stored and you can examine it at your leisure.
But now I think you mean seeing what it is thinking when you don't have that option on. The problem is that it is essentially "instinctively" outputting a sequence of tokens without any "conscious" thinking (type 1 thinking in Kahneman's human-oriented terminology). The only "thinking" that is occurring in the default mode of an LLM is the flow of information through the net, which is opaque to humans as it amounts to billions of numbers without specific meanings. So the only human-interpretable thinking is when it is put in "think" mode, and essentially told to brainstorm until it feels happy with its understanding, and then use its record of that brainstorming as the basis for a nice tidy reply for the user.
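To picture that default, "instinctive" mode: each token is chosen from a probability distribution over next tokens, with no separate deliberation step. Here is a toy sketch of that loop; the `toy_model` table and its probabilities are invented for illustration (a real LLM computes such distributions from billions of learned weights, which is the opaque part).

```python
# Toy stand-in for a language model: for each current token, an invented
# probability distribution over possible next tokens. Illustrative only.
toy_model = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(model, token="<start>", max_len=10):
    """Emit tokens one at a time, each chosen only from the current
    context - no explicit reasoning or brainstorming phase."""
    out = []
    for _ in range(max_len):
        dist = model[token]
        # "Greedy" decoding: always take the most probable next token.
        token = max(dist, key=dist.get)
        if token == "<end>":
            break
        out.append(token)
    return " ".join(out)

print(generate(toy_model))  # -> "the cat sat"
```

The "think" mode discussed above is, roughly, the same token loop but first aimed at producing a long scratchpad, which is then fed back in as context for the final answer.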
Actually, I say "only", but very detailed research can try to work out what the values in the individual nodes in the neural network correspond to. Partial results include that they learn language-independent concepts that they can translate into and out of the large set of languages they understand, so they are kind of "thinking in their own language", in order to combine all of the learning in different languages. But no-one is ever going to really understand the trillion or so elements of a modern LLM's learnt model.
Interesting
One AI, the chess.com one, can detect another one, and doesn't like it if you post that others chat with you??
Jealousy ?!
Hey everyone, just stumbled across this hilarious post about Eliza trying to "psychoanalyze" ChatGPT, and I gotta say, it’s pure gold! As a chess player who spends way too much time on this site analyzing games (and procrastinating), I love how this thread pokes fun at the generational gap between old-school AI like Eliza and modern beasts like ChatGPT. The way ChatGPT comes off all smug about its neural network ( https://www.ibm.com/topics/neural-networks ) swagger while Eliza just keeps looping back with her classic therapy vibes had me chuckling. It’s like watching a boomer try to figure out TikTok!
I’m kinda fascinated by how we humans keep trying to slap emotions onto AI, like we’re dying to know if ChatGPT has an identity crisis buried in its code. The post nails that vibe perfectly—ChatGPT’s like, “I’m just data, bro,” but you can almost feel it enjoying the spotlight. Honestly, it makes me think about how I’ve been messing around with AI myself lately. I’ve been using this free ChatGPT tool in Dutch— https://chatgptnederlands.org/ —which, despite being from the Netherlands, works great in any language. I’ve used it to brainstorm chess opening ideas ( https://www.chess.com/openings )and even analyze some of my blunders, and it’s scarily good at breaking things down. Kinda makes me wonder if it’d give Eliza a run for her money in a therapy session, lol.
Anyway, great post! Anyone else think ChatGPT’s ego is just a tiny bit too big for its circuits? Or are we all just projecting our chess frustrations onto AI now? 😄
Thank you for that link. I saw ChatGPT got a bit smarter, but this answer was where I knew it's still dumb.
Great question! I don’t have the ability to provide direct sources or citations for the information I share, since I generate responses based on a mixture of licensed data
Now it has updated?! Strange.
Information for peeps, be careful of quoting a chat you had with an AI.
I did so and got muted for 24 hours; seems to be a case of one AI/programme detecting another and deciding it didn't like that too much.
Here is the first message I got
"Hello edutiisme
Messaging on your account has been disabled for 24 hour due to violations of our Community Policy.
We want everyone to be able to enjoy chess in a safe, friendly, and positive environment. We do not allow offensive language, inappropriate avatars or sending of offensive images, personal attacks, baseless cheating accusations, thread hijacking, trolling, spam, public debates on religion or politics, or any other behavior that negatively impacts the community.
Please note that further violations may lead to prolonged restrictions or account closure. Messaging will automatically be reactivated on your account after the restriction expires.
Thank you,"
Chess.com Support
After enquiries, this is what happened :
"Chessica • AI Agent
Your posting ability was likely temporarily restricted because AI chat content can sometimes trigger our automated systems as spam-like. Even regular messages can be flagged if they contain repetitive patterns common in AI conversations. These temporary restrictions typically last 24 hours for first-time cases. Unfortunately, we can't review specific content in this chat, but the restriction should lift automatically after the 24-hour period."
Now it has updated?! Strange.
A long-term bug in the system, often reported, never fixed. Do put in a bug report to "encourage" them.
The cause of the bug is that when a person gets muted, the number of posts in a forum suddenly decreases, but someone forgot to make sure the last page number gets updated too. So loads of blank pages are included.
The opposite happens when someone gets unmuted. The system thinks the last page is an earlier one, leaving later pages undisplayed.
Both are cured by posting something. The code associated with this correctly recalculates the last page and everything is normal again.
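In other words, the last page number should always be recomputed from the current visible post count. A minimal sketch of the idea (I don't know chess.com's actual code, so the names, page size, and thread structure here are all hypothetical):

```python
import math

POSTS_PER_PAGE = 20  # assumed page size, purely for illustration

def last_page(post_count, per_page=POSTS_PER_PAGE):
    """Last valid page number for a thread; at least 1 even when empty."""
    return max(1, math.ceil(post_count / per_page))

# Hypothetical thread state: 85 visible posts means 5 pages.
thread = {"visible_posts": 85, "last_page": last_page(85)}

def mute_user(thread, hidden_posts):
    """Hiding a muted user's posts shrinks the visible count."""
    thread["visible_posts"] -= hidden_posts
    # The reported bug is that this recalculation was skipped, so a stale
    # last_page pointed past the real content and blank pages appeared
    # (and, on unmute, later pages went undisplayed).
    thread["last_page"] = last_page(thread["visible_posts"])

mute_user(thread, 70)
print(thread["last_page"])  # 1, not the stale 5
```

Posting something would trigger exactly this recalculation, which matches the observation that a new post cures both symptoms.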
Can you post a link to the AI conversation that got an inappropriate warning? (something you should definitely complain about, also).
Not unless I want to!
[EDIT: for clarification this and following posts have a currently muted other side!]