I will try that link, perhaps I will ask it where Nelsons fish and chip shop is.
In fact I will do exactly that.
Do! You will find an annoyingly inappropriate response, I am sure.
Asked and answered.
Eliza's statements are constructed in a simple way from the subject's, and sometimes they don't make sense. Try for yourself on one of the implementations.
So Eliza is so nonsensical that, when answering ChatGPT, it refers to itself instead of ChatGPT?
Answering the same question three times doesn't really serve a purpose.
If you say so.
Yes.
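To make "constructed in a simple way" concrete, here is a minimal sketch in Python (illustrative only: these few rules are invented for the example, and are not Weizenbaum's original code nor the code behind the linked implementation). The whole trick is a pattern template plus blind pronoun reflection, and the reflection is exactly what makes the bot appear to talk about itself:

    import re

    # Pronoun reflection table: the heart of the trick.
    # A minimal sketch, not Weizenbaum's actual rule set.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your",
        "you": "I", "your": "my", "yours": "mine",
    }

    def reflect(text: str) -> str:
        # Swap first- and second-person words; everything else passes through verbatim.
        return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

    def eliza_reply(statement: str) -> str:
        statement = statement.rstrip(".?!")
        # One template rule of the kind Eliza uses: if a pattern matches,
        # echo the subject's own words back inside a stock frame.
        match = re.match(r"(?:i am|i'm) (.*)", statement, re.IGNORECASE)
        if match:
            return f"How long have you been {reflect(match.group(1))}?"
        # Fallback when nothing matches: reflect the whole statement blindly.
        # This is where the nonsense comes from.
        return f"Why do you say that {reflect(statement)}?"

    print(eliza_reply("You are ChatGPT and you answer questions."))
    # -> Why do you say that I are chatgpt and I answer questions?

One fallback rule is enough to reproduce both symptoms complained about in this thread: the broken grammar, and a reply in which Eliza seems to be ChatGPT.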
Then on this one we must agree to disagree
A foolish mistake, given that there is no doubt about the facts and I have explained this.
To analyse the reason for your error:
Is there anything to be learnt? The way you stuck to a poorly founded guess is a pattern that could be avoided. It is at odds with objectivity and gets poor results. It also gets this response, which would have been worth avoiding, IMHO.
"No doubt"? "My error"? Sounds very confident, doesn't it?
Shall I refrain from giving you the response that comment fully deserves? For the moment, yes. You do realise that my comment "Then on this one we must agree to disagree" was in fact trying to end this and avoid this kind of thing??
Yes. You say that as if you are unfamiliar with the situation of having access to the precise facts about something and being 100% sure about them. It should be a common situation.
I have just remembered I can provide a link to the actual dialog (you have seen the screenshots of it in the OP).
Intuition is a calculus unaccounted for here.
Intuition has its place, but in reasoning one cannot rely upon it. Building intuition into AI will be a monumental task, perhaps impossible. In humans, it's there because automating certain cognitive processes allows one to dedicate time to more complex problems, in order to fully analyse them. Of course, it's just guessing, but with a reasonably high chance of success, which is why it develops in the first place. Basically, intuition (which is a system 1 process) allows one to utilise system 2 processes on the right issues, enabling efficient allocation of resources. Please forgive me if I have mixed up certain things here; I don't get much time to type my posts out, I lack information, and I am comparatively young.
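That last point about allocation can be made concrete with a routing sketch (Python; the functions, confidence numbers, and threshold are all invented for illustration, not a model of real cognition): a cheap "system 1" guess handles the cases it is confident about, and only the doubtful ones get the expensive "system 2" check.

    def system1_guess(x: int) -> tuple[bool, float]:
        # Cheap heuristic with a confidence estimate: even numbers above 2
        # "look" composite at a glance; odd numbers get only a weak guess.
        if x % 2 == 0 and x > 2:
            return False, 0.99   # almost certainly not prime
        return True, 0.6         # could be prime; not sure

    def system2_check(x: int) -> bool:
        # Slow, deliberate reasoning: trial division. Always correct, costs more.
        if x < 2:
            return False
        return all(x % d for d in range(2, int(x ** 0.5) + 1))

    def is_prime(x: int, threshold: float = 0.95) -> bool:
        # The routing rule: trust the fast guess when it is confident,
        # and spend system 2 effort only on the doubtful cases.
        guess, confidence = system1_guess(x)
        if confidence >= threshold:
            return guess
        return system2_check(x)

    print([n for n in range(2, 30) if is_prime(n)])
    # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

Here the shortcut happens to be safe, but in general a confident heuristic can be confidently wrong; the saving is that most inputs never touch the slow path.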
Plenty of things. For example, do you just accept it as a fact if someone says to you that they know something for sure??
Good luck if you do.
This conversation is bizarre; it reminds me of chats myself and Elroch had in the distant past, but with the roles reversed.
In those days, I was the rude, arrogant one... things change, eh?
An oddly distorted claim.
I posted some information in the first post that is very clear to anyone familiar with the look of a ChatGPT dialog (and in a format that is easily understood by anyone else). I explained that ChatGPT's comments were on the left and Eliza's on the right - anything else would have required deliberate misrepresentation.
Then I pointed out that some of the Eliza posts did not make sense, as a result of its crude programming (second paragraph, first post). I even explained later that it was a few hundred lines of code, the original version written in the 1960s, and that it had no real understanding of what it was receiving or writing.
After all that, you expressed incredulity that a crude program could construct output that was nonsensical. That incredulity was based, I would guess, on intuitive familiarity with communication between much more intelligent agents (such as humans).
I explained to you that there was no doubt - I had watched the output being generated and had reported it honestly. The nonsensical output was 100% Eliza. I reiterated why it generated nonsense output sometimes. I explained that you already had the answer to that question, and that a little investment of your own time could verify the point very easily. I provided a link to an implementation of Eliza.
Then YOU became arrogant, refusing to believe this and asking the same question about where the output had come from two more times.
You persisted, so I found and posted the link to the original bot dialog, confirming something that, frankly, needed no more confirmation.
No, it is not arrogant to correct a wrong guess and then to help someone fix it with a little effort, or by looking at the facts themselves. What you are confusing with arrogance is that I stated simple facts directly (without impoliteness) that happened to contradict your guess.
[See bold text for hints of how you could have gracefully changed direction]
Based on what I can see, I'm going to have to agree with Elroch. Even if you choose not to believe his statement that ChatGPT's comments were on the left and Eliza's were on the right, one can easily infer as much from the first few comments. Eliza's comments were most certainly nonsensical, just repeating what ChatGPT told it, resulting in very bad grammar, as well as making a number of other mistakes, like referring to its patient using its own name.
Well, as I said a while ago... we can agree to disagree on that.
You both know what that in essence means, I hope?
I suppose we should move on.
Intuition may be inconvenient, but it does exist, and it is a natural part of the biological programming matrix. Moving on...
When I hear terms like 'left' or 'right' infecting A.I. programming, I don't see ethics or the practice of law as fields where it can reliably replace human beings just yet.