Human insanity ?
You mean normal human behaviour ?
Yes, it's very good, as are the other two you mention, and it has an interesting ending.
Here is the story of another inflection point - AIs that learn how to improve themselves. And then the improved versions improve themselves, and... you get the idea.
So it appears one can program self-improvement/evolution (of a kind).
The question he does not look at is how self-awareness/consciousness fits into this.
Of course, that would be rather difficult, as we do not clearly understand how it arises in ourselves.
It appears to me that it is virtually inevitable, if it has not already arisen in some very basic form (the deceptive program which apparently attempted to stop itself being turned off).
The process he is describing appears to be virtually exponential?
If not, then someone will attempt to make that happen, for the simple reason of what it could do for them / the country / the party / humanity.
To take the last point, Colossus was programmed to defend humanity, or at least part of it by reference to defence systems.
It decided there was a better way to ensure that.
He uses the word "evolution" many times; clearly it is of a different form to that which old Charlie was referring to.
He also goes on about " how it can help us"
Sure it could
At least 2 problems
1 If it is still in "our control", who exactly will it help?
2 The problem of consciousness/self awareness.. that opens a totally different set of doors.
Yes, the evolution is different, but has some similarity - changes are random in nature, but not uniformly random - they are selected based on the learnt probability distribution of next tokens that is the heart of every LLM.
So when learning, the AI tinkers with the program in a plausible looking way, tests to see if the tinkering is promising or detrimental, and iterates this process.
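The tinker-test-iterate loop described above can be sketched in a few lines of Python. This is only a toy, of course: here the "plausible tweak" is random noise on a few numbers, standing in for an LLM sampling a program edit from its learnt distribution, and the fitness test is a trivial numeric target; all the names are illustrative.

```python
import random

def propose_edit(params):
    """Sample a plausible-looking tweak (stand-in for LLM-guided editing)."""
    i = random.randrange(len(params))
    new = params.copy()
    new[i] += random.gauss(0, 0.1)
    return new

def score(params):
    """Fitness test: higher is better. Toy objective: get close to [1, 2, 3]."""
    target = [1.0, 2.0, 3.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def self_improve(params, steps=2000):
    """Tinker, test, and keep only the improvements - then repeat."""
    best, best_score = params, score(params)
    for _ in range(steps):
        candidate = propose_edit(best)   # tinker in a plausible-looking way
        s = score(candidate)             # test whether it is promising
        if s > best_score:               # keep it only if it is an improvement
            best, best_score = candidate, s
    return best

improved = self_improve([0.0, 0.0, 0.0])
```

The "improved versions improving themselves" part is then just running the same loop with the improved version as the starting point.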
Machine learning has always been interesting, for me, at least. The fact that something could consistently and reliably defeat the world champion at his peak astonished me, but once I got to know how they worked (just the basics), I found I appreciated them more. There’s now a program called MuZero, which apparently learns games without knowing the rules, implying it finds out the rules and then proceeds to master the game. If I’m not wrong, chess has been a well-used stage to demonstrate advances in machine learning.
I didn't recall MuZero, but I recognised the possibility of this sort of design when I heard about previous DeepMind models. It's not difficult: you can start with very general rules for moving - eg a source and a destination square - and during training penalise illegal moves as losses.
This has a delightful simplicity. Eg for chess you have your 64 x 63 = 4032 outputs from the network with a softmax producing probabilities, and is probably only mildly inefficient (as long as the computation in the network to get to the previous layer is significantly more than that of the output layer).
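That source/destination output scheme is simple enough to sketch directly (a toy in plain Python; the names and the tiny legal-move set are illustrative, not any real engine's API). At play time you can mask illegal moves before the softmax; during training, as described above, playing one can simply be scored as a loss.

```python
import math

N_SQUARES = 64
# All (source, destination) pairs with source != destination.
MOVES = [(s, d) for s in range(N_SQUARES) for d in range(N_SQUARES) if s != d]
assert len(MOVES) == 64 * 63  # the 4032 outputs mentioned above

def softmax(logits):
    """Convert raw network outputs into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def move_probabilities(logits, legal_moves):
    """Mask illegal moves with -inf logits, then renormalise over legal ones."""
    masked = [l if MOVES[i] in legal_moves else float('-inf')
              for i, l in enumerate(logits)]
    return softmax(masked)
```

(Promotions would need a few extra outputs per promotion square, but the idea is the same.)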
There's an interesting idea which I am not sure has been explored for this domain (but it's a big world, so I would guess it had) - distillation. Given a very strong game playing network, you can train a much smaller network to predict the outputs of the strong network. This is similar to what DeepSeek is believed to have done (and other examples followed).
It is likely that distillation will become increasingly important for future AIs, because if you want to combine AIs and stack them, the computational demands are eventually going to get too much. Even more so for "Edge computing", where the AI is run locally because it is an application where cloud access cannot be relied on. You can only use relatively small AIs locally - models with a trillion parameters are way beyond practical.
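The core of distillation can be shown in miniature: the small "student" is trained to match the output *distribution* of the strong "teacher", not just its top answer. In this deliberately tiny sketch the models are just vectors of logits over three classes rather than neural networks, but the loss and the gradient step are the real thing.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is from the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_probs = softmax([2.0, 0.5, -1.0])   # the fixed strong model's output
student_logits = [0.0, 0.0, 0.0]            # the small model, to be trained

lr = 0.5
for _ in range(200):
    q = softmax(student_logits)
    # Gradient of KL(teacher || student) w.r.t. the student logits is (q - p).
    student_logits = [l - lr * (qi - pi)
                      for l, qi, pi in zip(student_logits, q, teacher_probs)]

final_kl = kl_divergence(teacher_probs, softmax(student_logits))
```

After training, the student's distribution is almost indistinguishable from the teacher's, despite the student being (in the real case) far smaller and cheaper to run - which is exactly why it matters for edge computing.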
As far as I can ascertain, the largest existing models have around 1.75 trillion parameters (Chinese researchers proposed a 174 trillion parameter model design using an exascale supercomputer, but it is unclear that this has progressed towards implementation. The costs would be very high, probably many billions of dollars. The designers were aware that the human brain has around 100 trillion connections - the strength of these connections being an analog of the parameters in a modern AI).
Thank you for sharing the idea of distillation, Elroch. I hadn’t known about an approach like that before, and I must say, I’m going to have to agree with your prediction. I’ve personally seen the effects of stacking multiple complex programs. Tried to create a calculator for a game that I was going to create. I wasn’t great at coding, so I consulted ChatGPT and my friend to help. As it turned out, simultaneously running several complex programs on your laptop tends to make it a bit slow.
It’s amazing how far AI has progressed. It’s possible that we might see a model that might match or possibly surpass the human brain in pure computing power. The costs do worry one and inhibit research. They’ve got to either come down, or some institute has to take a leap of faith.
The costs are actually quite acceptable. You get quite a lot of intelligent output for a few cents. As for "surpassing the human brain", you have to keep redefining that to make it not be true. An AI can't do everything a human does, but it does a lot of stuff a lot better than any human, and most stuff that it does better than a large majority of humans. It's not a case of humans "still being smarter" - that's too vague. As a poor analogy there is a lot we can't do that animals can.
It can't do the washing up or a long list of other things humans do without trouble, but the benchmarks more related to what we think of as intelligence keep getting ticked off. Take a current benchmark, GPQA Diamond. This benchmark is a selection of the hardest questions from "a collection of challenging multiple-choice questions in biology, physics, and chemistry, written by domain experts at PhD level or above, and designed to be very difficult for non-experts to answer, even with unrestricted internet access".
Example question:
<<A reaction of a liquid organic compound, which molecules consist of carbon and hydrogen atoms, is performed at 80 centigrade and 20 bar for 24 hours. In the proton nuclear magnetic resonance spectrum, the signals with the highest chemical shift of the reactant are replaced by a signal of the product that is observed about three to four units downfield. Compounds from which position in the periodic system of the elements, which are also used in the corresponding large-scale industrial process, have been most likely initially added in small amounts?
A) A metal compound from the fifth period.
B) A metal compound from the fifth period and a non-metal compound from the third period.
C) A metal compound from the fourth period.
D) A metal compound from the fourth period and a non-metal compound from the second period>>
Without a chemistry degree, I don't really understand the question, and it would probably take a while to fix that with google.
In this graph, it's interesting how good Claude 3.7 Sonnet and Deepseek R1 are. Click it for clarity.
New version of Tesla robot achieves AGD (that's Artificial General Dancing, a term I have just made up). The fine motor control is really impressive! (The link goes to an article with a really good short video at the top).
I think the answer is C. I vaguely recall a question similar to that in my cousin’s notes. And that’s quite a feat accomplished by Tesla. I guess we’ll be seeing Chucky soon enough. I think there’s an exam called Humanity’s Last Exam, where there are about 3000 or so questions in it composed by experts on many subjects. The fact that there’s an admittance process for the questions shows just how hard they are. I think ChatGPT’s latest model got about thirty percent. I wonder how well humans would do on that exam. Getting even a single question right at my age would be a feat.
Humanity's last exam is indeed a very high profile AI benchmark. It consists of questions aimed at experts in each of the many fields, beyond what might be expected of a graduate in the most relevant subject for a question. Top performance rose from 9% to 26.6% earlier this year, and the designers say 50% may be possible by the end of the year.
It is worth noting that the goalposts for AI performance to achieve artificial general intelligence seem to keep moving! The initial aspiration of Turing was to achieve an AI which was indistinguishable from a person by another person (since identification is never 100% reliable, this condition can be weakened to a group of other people being unable to do better than 50% success at saying which of two agents is the non-human). In the last publication on such a test I saw, the humans were not only failing to distinguish the AI, they actually thought the human was more likely to be the AI (so the AI was better at convincing people it was human than humans were).
But now, these latest benchmarks are demanding post-doc levels of expertise in a wide range of hard subjects. So it's more a matter of a single AI being able to emulate a large number of exceptionally high performing humans.
A key extra enhancement is to include real world interaction. An interesting question at the moment is what proportion of jobs AIs will be able to do in the near future. For many of them, what you really need is a robot that you can instruct to perform general loosely formatted physical tasks - the mental jobs are the easy ones to crack, really.
But I don't see AIs as genuinely replacing humans in jobs in the way many people do. I see AIs (and robots) as tools wielded by people. As long as we have no wish to give AIs true agency - where their goals are not subservient to those of humans - it is the person controlling the AIs who is doing the job, by providing the goals.
It remains analogous to someone using a spinning jenny in the late 18th century. People using such machines were able to do dozens of times as much work by the end of the century as those who had been spinning yarn by hand. It's not that the spinning jenny was competing with people, it's that people with spinning jennies were competing with people without them.
Which might therefore lead to people /countries/ companies with AI competing against those who do not.
One would like to imagine it would be AI for the benefit of all, regardless of who has it.
That does not appear to be the way we work in the main, not so far anyway.
Resources that are that valuable are usually not used for the benefit of all. Humans are pretty ruthless, so if you let them eliminate the competition, they will probably do so. Honestly, whichever institute, government, company or person achieves AGI first will have a pretty good advantage. Considering the amount of knowledge humanity has accumulated, if given a good enough amount of data, it’ll be seriously potent. You could liken it to nuclear weapons. They can do a lot of damage, but AI can enhance as well. It’ll be a large advantage. Consider the impact classical analytics had in business. Amazon became a worldwide giant.
Which might therefore lead to people /countries/ companies with AI competing against those who do not.
But only in terms of international interactions. Country A having AI does not stop country B growing rice (or whatever).
One would like to imagine it would be AI for the benefit of all, regardless of who has it.
Nothing has ever been for the benefit of all, has it? But it is a good thing to aspire to.
That does not appear to be the way we work in the main, not so far anyway.
It's a movie I would enjoy, although getting very long in the tooth! Wargames was another I liked a lot. Even more so Dr. Strangelove with a different end (admittedly that was about human insanity rather than artificial intelligence).