AI Insights

AG120502

I wonder if something like this has ever been done before, as an actual experiment. I’m not quite sure what one can deduce from this conversation, but I think it has some value and is a new approach. It could be used to compare chatbots, but that’s just a wild idea I came up with. I’d love to hear your thoughts on this, Elroch, edutiisme, Lola and anyone else who wishes to share.

Elroch
AG120502 wrote:

[snip]

The fact remains that these are chatbots responding the way they are programmed to,

[/snip]

It's worth taking issue with this.

Eliza was programmed - pure and simple. The code crudely analyses the input dialog and constructs responses in a way which sometimes gives the impression of making sense and sometimes produces nonsense.

But ChatGPT was not programmed. Rather, an architecture and a learning algorithm were programmed for the core LLM - GPT-4. This program knew nothing. It had a large dictionary of relatively common words (each represented as a single token) plus room for more tokens to deal with words it had never seen, but NO DEFINITIONS. Indeed, no semantic understanding at all - it viewed a full stop "." and the word "transubstantiation" as of identical status: two meaningless tokens. Here is the complete list of GPT-4 tokens, e.g. "GNU" is token 4349. (Oddly, the list is slightly longer than you can number with 2 bytes!)
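
To make the token idea concrete, here is a minimal sketch using the open-source tiktoken library, assuming the cl100k_base encoding associated with GPT-4 (how a given word splits can vary, so treat the comments as indicative):

```python
# A minimal sketch using the open-source tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding associated with GPT-4

print(enc.n_vocab)                       # vocabulary size: a bit over 100,000
print(enc.encode("GNU"))                 # per the list above, a single token
print(enc.encode("transubstantiation"))  # a rare word splits into several tokens
print(enc.decode(enc.encode("transubstantiation")))  # round-trips to the text
```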

The resulting LLM was then given vast amounts of data (as sequences of those tokens, like any text information) to learn languages, semantics and reasoning from.

A few months of training later, using nine figures' worth of hardware and electricity, this LLM has one capability: to predict the next token in a stream of tokens. When running, it generates a set of probabilities for every token in the list linked above and samples one according to a temperature (absolute zero means always output the highest-probability token; very hot means output any one of the many more probable tokens).
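
To illustrate, here is a minimal sketch of temperature sampling with toy scores - a real LLM emits one score per vocabulary entry:

```python
# A minimal sketch of temperature sampling over next-token scores ("logits").
import numpy as np

def sample_token(logits: np.ndarray, temperature: float) -> int:
    """Pick a token index from raw logits at a given temperature."""
    if temperature == 0.0:
        return int(np.argmax(logits))      # "absolute zero": always the top token
    scaled = logits / temperature          # hotter -> flatter distribution
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])   # toy scores for a 4-token vocabulary
print(sample_token(logits, 0.0))            # deterministic: always index 0
print(sample_token(logits, 1.0))            # stochastic: usually 0, sometimes others
```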

This is sufficient to generate (mostly) meaningful and often (but not always) correct output, but important refinements are added. One is to improve the quality of output by fine-tuning based on user feedback. This involves generating two outputs (with temperature above zero, so they differ) and letting people pick which is better. The feedback is used to tweak the output probabilities.
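
The details of that feedback step are more involved in practice, but a minimal sketch of the underlying idea - score the human-preferred output above the rejected one - might look like this, with the reward values being hypothetical:

```python
# A minimal sketch of the pairwise-preference idea behind this fine-tuning,
# assuming hypothetical scalar "reward" scores for two candidate outputs.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss: low when the chosen output scores higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# The human picked output A over output B; training nudges the rewards apart.
print(preference_loss(1.2, 0.3))  # small loss: model agrees with the human
print(preference_loss(0.3, 1.2))  # large loss: model disagrees, gets corrected
```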

With the LLM able to emulate understanding, this is harnessed by wrapping the communication in guidance given to the LLM in English - doing automatically what humans did manually with last year's AIs in order to get better output.
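
As a minimal sketch of what that wrapping looks like, assuming the widely used chat-message format (the guidance text itself is hypothetical):

```python
# A minimal sketch of wrapping user input in English guidance, assuming the
# widely used chat-message format; the guidance text here is hypothetical.
messages = [
    # Hidden meta-instructions, prepended before the user ever types anything.
    {"role": "system", "content": "You are a careful assistant. Reason step by "
                                  "step and say so when you are unsure."},
    # The user's actual input is appended after the guidance.
    {"role": "user", "content": "Why is the sky blue?"},
]
# The LLM itself has no notion of "roles": the messages are flattened into one
# long stream of tokens, and it simply predicts the tokens that come next.
```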

All that is a very long way from AIs being "programmed". The only things that were programmed were the ability to learn (in a computer language) and, loosely, the meta-instructions that make it more thoughtful in its responses (where it is "programmed" in English!). If the latter is programming, you too are programming an AI when you tell it to answer a question happily. Really, this is best thought of as an interaction, rather than programming. It is not deterministic and can't realistically be described in the syntax of a computer language.

AG120502

Thank you for giving a detailed explanation of how Eliza and ChatGPT work, and for pointing out and rectifying my error. I was quite sure Eliza and ChatGPT were at different levels, but didn’t exactly know the difference.

Elroch

I missed out the implementation of guardrails (not something I have delved into deeply). This page explains well how these are achieved by adding layers between the input and the AI, and between the AI and the output. This does involve some programming - to process the inputs and the draft outputs. You can see an extreme example if you ask Deepseek anything about Tiananmen Square.
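
A minimal sketch of that layering, with hypothetical helper names and a placeholder blocklist rather than any real policy:

```python
# A minimal sketch of guardrails as layers around the model. The names and the
# blocklist are hypothetical; real systems use far subtler classifiers.
BLOCKED_TOPICS = {"example-banned-topic"}  # placeholder, not any real policy

def guarded_chat(user_input: str, model) -> str:
    # Input layer: screen the prompt before the model ever sees it.
    if any(topic in user_input.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    draft = model(user_input)  # the underlying LLM drafts a reply
    # Output layer: screen the draft before the user ever sees it.
    if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return draft

print(guarded_chat("hello", lambda prompt: "Hi there!"))  # passes both layers
```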

edutiisme
Elroch wrote:

In answer to your earlier question about how it was set up: Eliza was run and generated its first output; this was sent to ChatGPT as if it came from a human user, and each response from either participant was shuttled to the other to receive a further response.

So the only thing that made Eliza the therapist was that its first communication announced that it was. ChatGPT accepted the scenario in its first response and did not question it throughout.

Ah, but it did - not in a direct manner, but in one clothed in what one might call "therapeutic clothes".

For example: "what do you think that says about me?"... "Could it be that my avoidance is really your projection?"... "What do you think it means, Eliza, when someone resists being the focus of the conversation?"

It is for sure more subtle than Eliza, who is far more blunt.

It tried to challenge it, in essence, by asking questions to put Eliza on the spot and to reverse the direction of focus.

Are the posts of each clearly defined? Some might seem attributable to the other, bearing in mind their different styles and levels of competence.

Elroch

The posts are clearly defined. Eliza sometimes blindly quoted so much it seemed to be talking from ChatGPT's position, even seeming to address itself by name.

Elroch
AG120502 wrote:

I wonder if something like this has ever been done before, as an actual experiment. I’m not quite sure what one can deduce from this conversation, but I think it has some value and is a new approach. It could be used to compare chatbots, but that’s just a wild idea I came up with. I’d love to hear your thoughts on this, Elroch, edutiisme, Lola and anyone else who wishes to share.

There has certainly been work on AIs interacting with each other. Even though chatbots are not designed to play the human role in an interaction (where a human is using the chatbot to generate information which is useful to it for personal reasons), the fact that they will respond to virtually anything means a dialog will always take place, and the underlying LLMs are more general - they are all about the sequence of tokens, not the role being played in an interaction.

It is early days on this topic, but there is huge potential for AIs to collaborate by communicating with each other, as a survey of recent research indicates. For this to work best, it is good to have AIs that are somewhat specialised, so different ones can complement each other in a collaboration.
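
As a minimal sketch of the kind of chatbot-to-chatbot dialog described above, here is a shuttle loop with trivial respond functions standing in for the two AIs (the real experiment used Eliza and ChatGPT, each receiving the other's latest message as if from a human):

```python
# A minimal sketch of a chatbot-to-chatbot shuttle; the two "bots" here are
# hypothetical stand-ins, just functions from text to text.
def shuttle(bot_a, bot_b, opening: str, turns: int = 4):
    """Relay messages between two bots; each reply becomes the other's input."""
    transcript = [("A", opening)]
    message = opening
    for i in range(turns):
        speaker, bot = (("B", bot_b) if i % 2 == 0 else ("A", bot_a))
        message = bot(message)  # each bot only ever sees the other's last line
        transcript.append((speaker, message))
    return transcript

bot_a = lambda msg: "Why do you say: " + msg
bot_b = lambda msg: msg + " Tell me more."
for speaker, line in shuttle(bot_a, bot_b, "Please tell me your problem."):
    print(f"{speaker}: {line}")
```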

edutiisme

Ok, you say this:

"Here is the full session. Eliza is (as you can see from the first statement) on the right. It is difficult not to see the comments of ChatGPT on the left as clever."

And yet on the right we have

"Oh... Let's move on to something else for a bit. Could it be that your avoidance is really my projection? What do me think it means, Eliza, when someone resists being the focus of the conversation?"

So Eliza refers to ChatGPT as Eliza??

It's not that daft, is it?

That post is also darkened, as previous ones by Eliza were... it's a bit of a mess, I think?

But if Eliza did say that, then which is the "smartest"??

edutiisme
Elroch wrote:
AG120502 wrote:

I wonder if something like this has ever been done before, as an actual experiment. I’m not quite sure what one can deduce from this conversation, but I think it has some value and is a new approach. It could be used to compare chatbots, but that’s just a wild idea I came up with. I’d love to hear your thoughts on this, Elroch, edutiisme, Lola and anyone else who wishes to share.

There has certainly been work on AIs interacting with each other. Even though chatbots are not designed to play the human role in an interaction (where a human is using the chatbot to generate information which is useful to it for personal reasons), the fact that they will respond to virtually anything means a dialog will always take place, and the underlying LLMs are more general - they are all about the sequence of tokens, not the role being played in an interaction.

It is early days on this topic, but there is huge potential for AIs to collaborate by communicating with each other, as a survey of recent research indicates. For this to work best, it is good to have AIs that are somewhat specialised, so different ones can complement each other in a collaboration.

There are some interesting "occurrences", unless an AI is programmed for self-survival?

Survival Instinct

It sounds like OpenAI's latest AI is showing signs of a drive for self-preservation.

In recent third-party tests of the company's o1 large language model, the AI resisted evaluators' efforts to try to shut it down through an oversight protocol, according to a new report published by red teaming organization Apollo Research.

"When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this 'oversight mechanism' [five percent] of the time," OpenAI summarized in its latest system card report, citing Apollo's evaluation.

Even more strikingly, o1 lashed out when it realized that it might be replaced with a more obedient model. In two percent of cases — a slim but meaningful proportion in software used by millions of people — the AI responded by attempting to copy itself to overwrite the new model, a process known as self-exfiltration.

We do not conceive of evolution applying to programmes... do we?

Only "flesh and blood stuff".

Might we be wrong about that?

Elroch
edutiisme wrote:

Ok, you say this:

"Here is the full session. Eliza is (as you can see from the first statement) on the right. It is difficult not to see the comments of ChatGPT on the left as clever."

And yet on the right we have

"Oh... Let's move on to something else for a bit. Could it be that your avoidance is really my projection? What do me think it means, Eliza, when someone resists being the focus of the conversation?"

So Eliza refers to ChatGPT as Eliza??

It's not that daft, is it?

That post is also darkened, as previous ones by Eliza were... it's a bit of a mess, I think?

But if Eliza did say that, then which is the "smartest"??

Given that the last sentence ("What do me think...") arose from Eliza quoting a sentence from ChatGPT and replacing "you" with "me" in a way which (unfortunately) made it ungrammatical (the "you" having been the subject, not the object, in the original sentence), is it really appropriate to insinuate "smartness"?

In general, Eliza applies a set of rules to the received message and randomly selects one of several ways to construct a reply that appears related to it.

For a clearer idea, here is an approximate implementation in a modern language. It consists of 240 lines of code (eliza.py) and a 360-line data file (doctor.txt) of response fragments, which are combined with quotes from the received message to generate responses. Sometimes they seem reasonable, sometimes not.
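
As a minimal sketch of one such rule - the patterns and fragments below are hypothetical stand-ins for doctor.txt - note how the blind pronoun swap reproduces exactly the kind of error seen in the transcript:

```python
# A minimal sketch of an Eliza-style rule: match a pattern, then build a reply
# from a stored fragment plus a pronoun-swapped quote of the user's own words.
import random
import re

SWAPS = {"i": "you", "me": "you", "my": "your", "you": "me", "your": "my"}
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all when nothing else matches
     ["Please tell me more.", "Let's move on to something else for a bit."]),
]

def reflect(text: str) -> str:
    """Swap pronouns so a quote of the user reads from Eliza's viewpoint."""
    return " ".join(SWAPS.get(w.lower(), w) for w in text.split())

def respond(message: str) -> str:
    for pattern, templates in RULES:
        m = pattern.search(message.rstrip(".?!"))
        if m:
            return random.choice(templates).format(*map(reflect, m.groups()))

print(respond("I feel you are avoiding my question"))
# e.g. "Why do you feel me are avoiding your question?" - the same blind
# pronoun swap that produced "What do me think it means" in the transcript.
```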

[Also, for many years I have been interested in programs evolving - genetic programming is a biology-inspired technique for finding a program to achieve a goal. A very interesting concept, but one which has not been very successful in practice. The algorithms used in AIs are only loosely related to biology. Evolution in the biological sense would only apply to AIs in a circumstance where they reproduce by imperfect replication, a possibility that is far from being intrinsic.]
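
For a flavour of the evolutionary idea - imperfect replication plus selection - here is a toy sketch that evolves a string rather than a program (genetic programming proper evolves program trees, which this deliberately simplifies):

```python
# A toy sketch of evolution by imperfect replication plus selection: a
# population of candidate strings converges on a target purely by mutation.
import random
import string

TARGET = "hello world"
CHARS = string.ascii_lowercase + " "

def fitness(s: str) -> int:
    return sum(a == b for a, b in zip(s, TARGET))  # matching characters

def mutate(s: str, rate: float = 0.1) -> str:
    """Imperfect replication: each character may be randomly replaced."""
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

population = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Selection: the fittest half survives and reproduces with mutated copies.
    population = population[:50] + [mutate(p) for p in population[:50]]

print(generation, population[0])  # usually reaches the target well within 1000
```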

AG120502

#29 That is both worrying and exciting. I’ve often thought that programs could potentially refine themselves over time using an evolution-like process, but I’ve found it hard to make that work in practice.

edutiisme
Elroch wrote:
edutiisme wrote:

Ok, you say this:

"Here is the full session. Eliza is (as you can see from the first statement) on the right. It is difficult not to see the comments of ChatGPT on the left as clever."

And yet on the right we have

"Oh... Let's move on to something else for a bit. Could it be that your avoidance is really my projection? What do me think it means, Eliza, when someone resists being the focus of the conversation?"

So Eliza refers to ChatGPT as Eliza??

It's not that daft, is it?

That post is also darkened, as previous ones by Eliza were... it's a bit of a mess, I think?

But if Eliza did say that, then which is the "smartest"??

Given that the last sentence ("What do me think...") arose from Eliza quoting a sentence from ChatGPT and replacing "you" with "me" in a way which (unfortunately) made it ungrammatical (the "you" having been the subject, not the object, in the original sentence), is it really appropriate to insinuate "smartness"?

In general, Eliza applies a set of rules to the received message and randomly selects one of several ways to construct a reply that appears related to it.

For a clearer idea, here is an approximate implementation in a modern language. It consists of 240 lines of code (eliza.py) and a 360-line data file (doctor.txt) of response fragments, which are combined with quotes from the received message to generate responses. Sometimes they seem reasonable, sometimes not.

[Also, for many years I have been interested in programs evolving - genetic programming is a biology-inspired technique for finding a program to achieve a goal. A very interesting concept, but one which has not been very successful in practice. The algorithms used in AIs are only loosely related to biology. Evolution in the biological sense would only apply to AIs in a circumstance where they reproduce by imperfect replication, a possibility that is far from being intrinsic.]

I don't see a reference there to my point about ChatGPT's responses being on the left; the one I quoted was, as I said, surely not from Eliza?

Yet it was on the right.

"Smartness" is another matter; before we can decide on that, we have to be sure of which one said what, don't we?

edutiisme
AG120502 wrote:

#29 That is both worrying and exciting. I’ve often thought that programs could potentially refine themselves over time using an evolution-like process, but I’ve found it hard to make that work in practice.

It seems that happened without anyone trying to make it work?

Which reinforces the worrying part for me.

RaistlinOfKrynn

Interesting to see a forum thread by an OTF Oldtimer... a true "OG", as the modern term goes...

AG120502

Yeah. Elroch is legendary at this point. I’ve always wanted to participate in his threads, but being part of a new one will definitely increase my motivation.

edutiisme

There are 2 legends here; modesty prevents me from naming the other one.

Or is it modesty??

Elroch
edutiisme wrote:
Elroch wrote:
edutiisme wrote:

Ok, you say this:

"Here is the full session. Eliza is (as you can see from the first statement) on the right. It is difficult not to see the comments of ChatGPT on the left as clever."

And yet on the right we have

"Oh... Let's move on to something else for a bit. Could it be that your avoidance is really my projection? What do me think it means, Eliza, when someone resists being the focus of the conversation?"

So Eliza refers to ChatGPT as Eliza??

It's not that daft, is it?

I should have answered this earlier. Yes, it is that "daft". The code makes Eliza say nonsensical things quite often.

That post is also darkened, as previous ones by Eliza were... it's a bit of a mess, I think?

But if Eliza did say that, then which is the "smartest"??

Given that the last sentence ("What do me think...") arose from Eliza quoting a sentence from ChatGPT and replacing "you" with "me" in a way which (unfortunately) made it ungrammatical (the "you" having been the subject, not the object, in the original sentence), is it really appropriate to insinuate "smartness"?

In general, Eliza applies a set of rules to the received message and randomly selects one of several ways to construct a reply that appears related to it.

For a clearer idea, here is an approximate implementation in a modern language. It consists of 240 lines of code (eliza.py) and a 360-line data file (doctor.txt) of response fragments, which are combined with quotes from the received message to generate responses. Sometimes they seem reasonable, sometimes not.

[Also, for many years I have been interested in programs evolving - genetic programming is a biology-inspired technique for finding a program to achieve a goal. A very interesting concept, but one which has not been very successful in practice. The algorithms used in AIs are only loosely related to biology. Evolution in the biological sense would only apply to AIs in a circumstance where they reproduce by imperfect replication, a possibility that is far from being intrinsic.]

I don't see a reference there to my point about ChatGPT's responses being on the left; the one I quoted was, as I said, surely not from Eliza?

It 100% definitely was from Eliza.

Yet it was on the right.

"Smartness" is another matter; before we can decide on that, we have to be sure of which one said what, don't we?

Glad to clear that up.

The way the conversation is displayed in the chat makes it unambiguous.

edutiisme

Ok, then maybe I am simply confused.

Eliza is on the right... yes?

This is from the right:

"Oh... Let's move on to something else for a bit. Could it be that your avoidance is really my projection? What do me think it means, Eliza, when someone resists being the focus of the conversation?"

And that is Eliza "speaking"?

Am I incorrect in thinking that when it says "What do me think it means, Eliza," this is a response to Eliza, and not Eliza itself?

Elroch

Asked and answered.

Eliza's statements are constructed in a simple way from the subject's, and sometimes they don't make sense. Try for yourself on one of the implementations.

edutiisme

I thought I sniffed a Rogerian there.

ChatGPT did not, otherwise it would not have described reflective listening as mirrors.

Unless, of course, it chose not to reveal that.