AI Insights

edutiisme
Elroch wrote:

Asked and answered.

Eliza's statements are constructed in a simple way from the subject's, and sometimes they don't make sense. Try for yourself on one of the implementations.

So Eliza is so nonsensical that, when answering ChatGPT, it refers to itself instead of to ChatGPT?

If you say so

edutiisme

I will try that link; perhaps I will ask it where Nelson's fish and chip shop is.

In fact I will do exactly that

Elroch
edutiisme wrote:
Elroch wrote:

Asked and answered.

Eliza's statements are constructed in a simple way from the subject's, and sometimes they don't make sense. Try for yourself on one of the implementations.

So Eliza is so nonsensical that, when answering ChatGPT, it refers to itself instead of to ChatGPT?

Answering the same question three times doesn't really serve a purpose.

If you say so

Yes.

Elroch
edutiisme wrote:

I will try that link; perhaps I will ask it where Nelson's fish and chip shop is.

In fact I will do exactly that

Do! You will find an annoyingly inappropriate response, I am sure.

edutiisme

Yes, I want to throttle that one now, but for a different reason

edutiisme
Elroch wrote:
edutiisme wrote:
Elroch wrote:

Asked and answered.

Eliza's statements are constructed in a simple way from the subject's, and sometimes they don't make sense. Try for yourself on one of the implementations.

So Eliza is so nonsensical that, when answering ChatGPT, it refers to itself instead of to ChatGPT?

Answering the same question three times doesn't really serve a purpose.

If you say so

Yes.

Then on this one we must agree to disagree

Elroch

A foolish mistake, given that there is no doubt about the facts and I have explained this.

To analyse the reason for your error:

  1. you saw the bot discussion and guessed the participants must be reasonably intelligent and would not say stupid things
  2. you then ignored every piece of relevant information, specifically that I know the answer for certain and have explained why Eliza sometimes produces nonsensical outputs
  3. you ignored the proof that is visible in the OP, which contains screen grabs from the ChatGPT side of the conversation. And if you were familiar with ChatGPT you would know that the other party always has their posts on the right. (My guess is this point did not register).
  4. having failed to recognise that relevant information is better than a guess, you did not bother to experiment yourself, which would easily have shown that Eliza will output similar nonsense for you.

Is there anything to be learnt? The way you stuck to a poorly founded guess is a pattern that could be avoided. It is at odds with objectivity and gets poor results. It also gets this response, which would have been worth avoiding, IMHO.

edutiisme
Elroch wrote:

A foolish mistake, given that there is no doubt about the facts and I have explained this.

To analyse the reason for your error:

"No doubt" ? "My error" Sounds very confident, doesnt it ?

  1. you saw the bot discussion and guessed the participants must be reasonably intelligent and would not say stupid things. It's rather difficult to see this comment as anything other than mind reading. So you think I saw the conversation and imagined that they could not say stupid things? I think it is pretty obvious that they can. I pointed out a while ago how ChatGPT's comments about Eliza being mirror-like were rather stupid... did I not? Please feel free to read back and see for yourself. Mind reading is not normally a good form of evidence, is it? Or a basis on which to begin an argument.
  2. you then ignored every piece of relevant information, specifically that I know the answer for certain and have explained why Eliza sometimes produces nonsensical outputs. I am not and never have suggested that Eliza was incapable of stupid responses; would you care to quote me where I specifically made that claim? I am not aware that you, or anyone else, simply stating "I know the answer for certain" is any better. Do you simply accept it if someone says that... automatically... without question? Are you having a laugh here?
  3. you ignored the proof that is visible in the OP, which contains screen grabs from the ChatGPT side of the conversation. And if you were familiar with ChatGPT you would know that the other party always has their posts on the right. (My guess is this point did not register). It is not that difficult a concept to grasp, even for a numbskull such as myself. More mind reading? You have a new career ahead of you, Elroch!
  4. having failed to recognise that relevant information is better than a guess, you did not bother to experiment yourself, which would easily have shown that Eliza will output similar nonsense for you. I have done exactly that; otherwise why would I say above that I wanted to throttle it as well?

Is there anything to be learnt? The way you stuck to a poorly founded guess is a pattern that could be avoided. It is at odds with objectivity and gets poor results. It also gets this response, which would have been worth avoiding, IMHO. Shall I refrain from giving you the response that comment fully deserves? For the moment, yes. You do realise that my comment "Then on this one we must agree to disagree" was in fact trying to end this and avoid this kind of stuff?

Elroch
edutiisme wrote:
Elroch wrote:

A foolish mistake, given that there is no doubt about the facts and I have explained this.

To analyse the reason for your error:

"No doubt" ? "My error" Sounds very confident, doesnt it ?

Yes. You say that as if you are unfamiliar with the situation of having access to the precise facts about something and being 100% sure about them. It should be a common situation.

I have just remembered I can provide a link to the actual dialog (you have seen the screenshots of it in the OP).

Tamer

Intuition is a calculus unaccounted for here.

Flow3rb0y
I don’t know a SINGLE person in this thread, but Tamer, I love your pfp
Flow3rb0y
Idk why
AG120502
Tamer wrote:

Intuition is a calculus unaccounted for here.

Intuition has its place, but in reasoning one cannot rely upon it. Building intuition into AI will be a monumental task, perhaps impossible. In humans, it’s there because automating certain cognitive processes allows one to dedicate time to more complex problems, in order to fully analyse them. Of course, it’s just guessing, but with a reasonably high chance of success, which is why it develops in the first place. Basically, intuition (which is a system 1 process) allows one to utilise system 2 processes on the right issues, enabling efficient allocation of resources. Please forgive me if I have mixed up certain things here, because I don’t get much time to type my posts out and lack information, also being of a comparatively younger age.

AG120502

And I can’t quite understand what you disagree with, edutiisme.

edutiisme

Plenty of things. For example, do you just accept it as a fact if someone says to you they know something for sure?

Good luck if you do.

This conversation is bizarre; it reminds me of chats Elroch and I had in the distant past, but with the roles reversed.

In those days, I was the rude, arrogant one... things change, eh?

Elroch

An oddly distorted claim.

I posted some information in the first post that is very clear to anyone familiar with the look of a ChatGPT dialog (and a format that is easily understood by anyone else). I explained that ChatGPT's comments were on the left and Eliza's on the right - anything else would have required deliberate misrepresentation.

Then I pointed out that some of the Eliza posts did not make sense, as a result of its crude programming. (Second paragraph, first post). I even explained later that it was a few hundred lines of code, the original version written in the 1960s, and had no real understanding of what it was receiving or writing.
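
To illustrate, here is a minimal Python sketch of the kind of keyword-and-reflection rewriting Eliza does. The rules below are invented for the example - they are not Weizenbaum's actual script - but the mechanism is the same: match a keyword pattern, reflect the pronouns, and slot the user's own words into a canned template.

import re

# Pronoun reflection: "my" -> "your", "me" -> "you", and so on.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# (pattern, template) pairs; the captured fragment is reflected and re-inserted.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def reflect(fragment):
    # Swap first- and second-person words, word by word, with no grammar check.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text):
    # Return the first matching template, filled with the reflected fragment.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT

print(respond("My conversation with ChatGPT confused me"))
# -> Tell me more about your conversation with ChatGPT confused you.

The last line shows the failure mode under discussion: because the rewriting is purely textual, the program happily produces ungrammatical sentences and can end up addressing the wrong party - exactly the sort of nonsense visible in the OP.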

After all that, you expressed incredulity that a crude program could construct nonsensical output. Your guess was based, I suspect, on intuitive familiarity with other, much more intelligent agents (such as humans) communicating.

I explained to you that there was no doubt - I had watched the output being generated and had reported it honestly. The nonsensical output was 100% Eliza. I reiterated why it generated nonsense output sometimes. I explained that you already had the answer to that question, and that a little investment of your own time could verify the point very easily. I provided a link to an implementation of Eliza.

Then YOU became arrogant, refusing to believe this, asking the same question about where the output had come from two more times.

You persisted, so I found and posted the link to the original bot dialog, confirming something that, frankly, needed no more confirmation.

No, it is not arrogant to correct a wrong guess and then to help someone fix it with a little effort, or by looking at the facts themselves. What you are confusing with arrogance is that I stated simple facts directly (without impoliteness) that happened to contradict your guess.

[See bold text for hints of how you could have gracefully changed direction]

AG120502

Based on what I can see, I’m going to have to agree with Elroch. Even if you choose not to believe his statement that ChatGPT’s comments were on the left and Eliza’s were on the right, one can easily infer so from the first few comments. Eliza’s comments were most certainly nonsensical, just repeating what ChatGPT told it, resulting in very bad grammar, as well as making a number of other mistakes, like referring to its patient using its own name.

edutiisme

Well, as I said a while ago... we can agree to disagree on that

You both know what that in essence means... I hope?

AG120502
edutiisme wrote:

Well, as I said a while ago... we can agree to disagree on that

You both know what that in essence means... I hope?

I suppose we should move on.

Tamer
AG120502 wrote:
Tamer wrote:

Intuition is a calculus unaccounted for here.

Intuition has its place, but in reasoning one cannot rely upon it. Building intuition into AI will be a monumental task, perhaps impossible. In humans, it’s there because automating certain cognitive processes allows one to dedicate time to more complex problems, in order to fully analyse them. Of course, it’s just guessing, but with a reasonably high chance of success, which is why it develops in the first place. Basically, intuition (which is a system 1 process) allows one to utilise system 2 processes on the right issues, enabling efficient allocation of resources. Please forgive me if I have mixed up certain things here, because I don’t get much time to type my posts out and lack information, also being of a comparatively younger age.

Intuition may be inconvenient, but it does exist, and is a natural part of the biological programming matrix. Moving on...

When I hear terms like 'left' or 'right' infecting A.I. programming, I don't yet see A.I. as a reliable replacement for human beings in ethics or the practice of law.