“The technology we’re building today is not sufficient to get there,” said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. “What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That’s very different from what you and I do.” In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today’s technology were unlikely to lead to AGI.
Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion… And scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI’s imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today’s technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.
Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other skeptics say pushing machines to human-level intelligence will require at least one big idea that the world’s technologists have not yet dreamed up. There is no way of knowing how long that will take. “A system that’s better than humans in one way will not necessarily be better in other ways,” Harvard University cognitive scientist Steven Pinker said. “There’s just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven’t even thought of yet. There’s a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets.”
The funniest shit about AI is that even if the current line of research were promising (it isn’t, beyond specific domain use cases), the ecological holocaust being caused by AI datacenters will destroy this planet as a habitable place WELL before we develop an artificial intelligence.
This is all so pointlessly dumb, on so many levels
OpenAI has contractually defined the development of AGI using a metric of ChatGPT sales numbers, so get ready for them to claim they’ve developed AGI even though they never will.
Yup, and that’s just one of many things that make me confident in my impulse to never trust OpenAI or any company that is just so obviously a money-grabbing grift.
[OpenAI and Microsoft] came to agree in 2023 that AGI will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits. Source.
What a ridiculous way of thinking.
Thanks for citing it I didn’t feel like looking it up
wtf
I am not sure I believe in AGI. Like it will never exist, because it can’t. I could be wrong. Hell, I’m often wrong, I just don’t think a machine will ever be anything but a machine.
If intelligence occurred in us by pure trial and error, there’s no reason it could not be made artificially.
How would you define AGI then? If the definition is just “intelligence” then I would say we are already there. I think the concept is infinitely complex and our human understanding may never totally get there. Again, I could be wrong. Technology changes. People said man could never fly too.
Removed by mod
Organic matter that can’t be replicated
Removed by mod
It’s organic matter that is created. It’s not AGI? This isn’t Black Mirror, you can’t duplicate consciousness from organic matter to a digital medium. We don’t even know what consciousness is. I understand that before the airplane, people thought that manned flight was impossible. How can we create consciousness when we don’t even know what the goal is? If the understanding of consciousness changes and the technology to create a digital rendition of it comes about, then clearly my position will change. I guess I just lack the foresight to see that understanding and technology ever coming to fruition.
Removed by mod
I don’t know why you are being rude.
A system in an organic medium and a system in a digital medium are completely different, and your logic doesn’t carry over to both just because the word “system” applies to both mediums. It’s like saying weather can happen inside my Nintendo, because they are both systems.
Saying that there’s an ability to create a being of understanding in a digital system is the same as saying we have the ability to travel faster than light by bending spacetime through a wormhole. Can you do it? Maybe, but current science says there’s no evidence of that ability outside of thought exercises. Right now, it’s science fiction, and there’s no current way of even testing whether it can exist.
Removed by mod
I believe it would be possible with quantum computing and more understanding of actual brains. All you have to do is mimic how a real brain works with electronics. Hell, you can make a computer out of brain cells right now, and it can solve problems better than these “throw everything at the wall and see what sticks” kinda AIs. Though… If a computer has a real biological brain in it doing the thinking, is it artificial intelligence?
What would quantum computing do to help?
Compute!
Quantumly!
“What we are building now are things that take in words and predict the next most likely word…”
This is a gross oversimplification and doesn’t reflect current understanding of how the most advanced LLMs work. Anthropic has recently published papers showing that Claude “sometimes thinks in a conceptual space” and will “plan what it says many words ahead”.
This doesn’t seem quite so different from human intelligence as the summary suggests. (For contrast, a bare-bones sketch of the plain next-word loop is below.)
https://www.anthropic.com/news/tracing-thoughts-language-model
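For anyone curious what “predict the next most likely word” looks like at its most basic, here’s a minimal sketch. It assumes the Hugging Face transformers library and the small open GPT-2 model; it’s purely illustrative of plain greedy next-token prediction, not how Claude or any other frontier model actually works.

```python
# Minimal sketch of "take in words and predict the next most likely word".
# Assumes the Hugging Face `transformers` library and the small GPT-2 model;
# illustrative only -- frontier models layer a lot more on top of this loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Artificial general intelligence is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                        # generate 20 tokens, one at a time
        logits = model(input_ids).logits       # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()       # greedily pick the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Even this toy loop only ever scores one token at a time; the interpretability work linked above is about what the layers are doing internally before that score comes out, which is roughly where the “plan what it says many words ahead” claim comes from.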