https://www.youtube.com/watch?v=nud2TQNahaU
Christmas Wrapping
Not technically a "future technology", but this has a "WOW!" factor that makes this seem like a good place to put it.
WSPR (Weak Signal Propagation Reporter) is an extraordinary ad hoc technology that uses data on the strength of amateur radio signals to detect planes - effectively radar using ordinary transmitters and receivers rather than dedicated equipment. In recent years it has been proposed that this technology be applied to 2014 data to track the still-missing Malaysia Airlines 777, MH370, and find where it fell into the Indian Ocean. This has now been done, and a new search is about to start on a "no find, no fee" basis, with the Malaysian government offering a bounty if the plane can finally be found.
I think our limited knowledge in the field of neuroscience and the mechanics of the brain is probably what's holding us back from massive technological progress in regards to Artificial Intelligence. I mean, true A.I. that can think for itself and learn on its own, without being fed human training data, still does not exist, mostly because we don't know the magic ingredient that breeds creative thought in humans yet. We are currently simplifying the process to combining hundreds of human ideas that already exist to simulate creativity, but the question then becomes: where did those thoughts and ideas originally come from? Somewhere, they must have just appeared in someone's mind, somehow. And we don't know why that is. Until we do, it will be near impossible to make a true, fully functional replica of human intelligence artificially.
Current "A.I." is good at running through hundreds of millions of possibilities until it finds one that works. It can do trial and error at a rate humans cannot. But it doesn't know how to make those possibilities itself. And that's its biggest weakness.
Some things seem to take a lot of time. I gather the new (and final?) search for MH370 is only now just about to start. The plane went down 11 years ago.
I think our limited knowledge in the field of neuroscience and the mechanics of the brain is probably what's holding us back from massive technological progress in regards to Artificial Intelligence. I mean, true A.I. that can think for itself and learn on its own, without being fed human training data, still does not exist,
Not really true (any more than it is for humans, who are all "fed human training data"). Especially with the latest versions, which learn in a different way. Remember how AlphaZero learnt? Humans taught it only the rules, and it played against itself until it was the best Go, chess and shogi player in the world. This element has been added to the latest general AIs such as GPT-4o.
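The self-play idea can be sketched in miniature. The toy example below is mine, not AlphaZero's actual algorithm (that combines deep networks with Monte Carlo tree search); it teaches a program the game of Nim purely by playing against itself, and the only human input is the rules:

```python
import random

# Nim: a pile of 21 sticks, players alternate removing 1-3,
# whoever takes the last stick wins. Humans supply only these rules;
# any skill emerges from self-play.

Q = {}  # (sticks_left, move) -> estimated value for the player to move


def choose(sticks, epsilon):
    """Epsilon-greedy move selection: mostly pick the best-known move,
    occasionally explore a random one."""
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < epsilon:
        return random.choice(moves)  # explore
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))  # exploit


def self_play(episodes=20000, alpha=0.3, epsilon=0.2):
    """Play the game against itself many times, nudging each
    (state, move) estimate toward the eventual win/loss outcome."""
    for _ in range(episodes):
        sticks = 21
        history = []  # one (state, move) entry per turn
        while sticks > 0:
            m = choose(sticks, epsilon)
            history.append((sticks, m))
            sticks -= m
        # The player who made the last move wins (+1); walking back
        # through the game, the sign flips each turn because the
        # players alternate.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward


self_play()
```

After enough episodes the greedy policy usually rediscovers the classic strategy of leaving the opponent a multiple of four sticks - knowledge nobody programmed in.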
mostly because we don't know the magic ingredient that breeds creative thought in humans yet.
We can however be pretty sure that there is no magic ingredient. It's all about emergent complexity. Brains just fire neurons and form connections between them. But they do this with a heck of a lot of neurons and even more connections. The magic is the scale, mainly.
We are currently simplifying the process to combining hundreds of human ideas that already exist to simulate creativity, but the question then becomes: where did those thoughts and ideas originally come from? Somewhere, they must have just appeared in someone's mind, somehow. And we don't know why that is. Until we do, it will be near impossible to make a true, fully functional replica of human intelligence artificially.
I see this as an imaginary problem.
Current "A.I" is good at running through hundreds of millions of possibilities until it finds one that works.
That is not at all how modern AI works.
It works more like a human does, step by step. "Thinking" was absent from earlier LLMs but is a big deal now; it consists of a step-by-step construction of an understanding of the topic, expressed as a sequence of words or other tokens. If the model finds that isn't enough, it does the same again from a different starting point, learning from what it has done so far. Each step is an application of the model it has learnt, but all a step does is generate the next token, either in its thinking or in the response it provides - the two are very similar, just as they are in humans. It is not like a chess program examining millions of parallel possibilities. It does generate text stochastically (it computes the probability of every candidate next token across its whole vocabulary), but at each step it actually generates only one.
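A minimal sketch of that last point: probabilities are computed over the whole vocabulary (softmax), but each step commits to a single token. The tiny "model" here is an invented lookup table, not a real network:

```python
import math
import random


def sample_next_token(logits, temperature=1.0):
    """Softmax the raw scores over the vocabulary, then draw ONE token.
    The model 'knows' the probability of every candidate, but each
    step emits a single choice - it never explores them all."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = random.random() * total
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r < cum:
            return i
    return len(exps) - 1


# Toy stand-in for a trained model: a fixed table of scores per context.
# (Invented for illustration; a real LLM computes these with a network.)
vocab = ["the", "cat", "sat", "<end>"]
score_table = {
    (): [3.0, 0.1, 0.1, 0.0],
    ("the",): [0.1, 3.0, 0.3, 0.0],
    ("the", "cat"): [0.1, 0.1, 3.0, 0.5],
}

context = ()
while len(context) < 10:
    logits = score_table.get(context, [0.0, 0.0, 0.0, 5.0])  # unknown: stop
    tok = vocab[sample_next_token(logits)]
    if tok == "<end>":
        break
    context += (tok,)
print(" ".join(context))
```

Most runs print "the cat sat", but because each step is a random draw over the full distribution, other continuations occasionally appear - which is exactly the stochastic-but-one-at-a-time behaviour described above.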
It can do trial and error at a rate humans cannot. But it doesn't know how to make those possibilities itself. And that's its biggest weakness.
Have you even learnt anything about how modern AI works?
Let me provide an analogy. A human writer produces words in a sequential manner. It is a fact that the next word they produce is a consequence of their entire mental state, incorporating all of their experience, and also some internal models - say, a plan for how a novel will develop. AIs can emulate that: earlier ones could only do the stochastic generation of words, but the recent ones can also construct a hidden plan which they use to guide the text generated.
Models which are in some ways similar (and different in others) are getting very good at producing art, video, music and much else.
This is a decent catch-all, but ultimately any of these new technologies that take off will probably need its own thread.
Unusually, I appreciate the bump, because I had lost that piece of AI artwork I commissioned. Click and enjoy.
I have never been aware of the Waitresses or their Christmas song.