lukeschlather
yesterday at 7:53 PM
This is a really good overview, and it has held up remarkably well after several decades: at least in terms of the facts, everything it predicts has happened as the author said. I do want to pick at some of the numbers in the upper bound, though, because we're getting close to the end of the first third of the century and we don't have ASI yet, even though we've roughly hit the upper bound the author defines.
> Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz and since its memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate)
I admit my feeling is that neurons/synapses probably have less than 100 bytes of memory, and also that a byte or less is more plausible, but I would like to see some more rigorous proof that they can't possibly have more than a gigabyte of memory that the synapse/neuron can access at the speed of computation.
The author has a note where they handwave away the possibility that chemical processes could meaningfully increase the operations per second, and I'm comfortable with that, but this point:
> Perhaps a more serious point is that neurons often have rather complex time-integration properties
Seems more interesting, especially if there's dramatically more storage available in neurons/synapses: if a neuron can do maybe some operations per minute over 1 GB of data per synapse, for example. (Which sounds absurdly high, but just for the sake of argument.)
And I think putting in some absurdly generous upper bounds might be helpful, since we're clearly past 100 TOPS. Asking, like: how many H100s would you need if we made some absurd suppositions about the capacity of human synapses and neurons? It seems like we probably have enough. But I also think you could make a case that some of the largest supercomputing clusters are the only things that can actually match the upper bound for the capacity of a single human brain.
Although I think someone might be able to convince me that a manageable cluster of H100s already meets the most generous possible upper bound.
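Here's the kind of back-of-envelope I mean, as a minimal Python sketch. Every constant in it is an assumption I'm making up for the sake of argument (the synapse count, the signaling rate, the H100's rough INT8 peak):

```python
# Back-of-envelope: how many H100s at increasingly generous brain estimates?
# Every constant here is a made-up assumption for the sake of argument.
synapses = 1e14       # ~10^14 synapses, a common order-of-magnitude estimate
rate_hz = 100         # ~100 Hz signaling, as in the paper
h100_ops = 2e15       # ~2000 TOPS, roughly an H100's INT8 peak with sparsity

for ops_per_event in (1, 1e2, 1e4):  # 1x, 100x, 10,000x compute per synaptic event
    brain_ops = synapses * rate_hz * ops_per_event
    print(f"{ops_per_event:>8,.0f} ops/event -> {brain_ops / h100_ops:>9,.0f} H100s")
```

At 1 op per synaptic event you'd need only a handful of H100s on raw throughput; at a 10,000x-generous allowance you're at ~50,000 of them, which is exactly "one of the largest supercomputing clusters" territory.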
RaftPeople
today at 3:32 PM
> I admit my feeling is that neurons/synapses probably have less than 100 bytes of memory, and also that a byte or less is more plausible, but I would like to see some more rigorous proof that they can't possibly have more than a gigabyte of memory that the synapse/neuron can access at the speed of computation.
Based on lots of reading about brain research and the relentless flow of new and unknown things that need further research, my personal gut feeling is that the estimates in that paper of the brain's computational ability don't really have a valid foundation. Too many things have been discovered since then, and too many things are still not understood.
Some interesting items:
1-Astrocytes are computational cells which need to be included in the math. They have internal calcium waves localized in their processes, as well as across the entire cell and between cells.
2-Recent research showed that neuron signal timing down to the millisecond level carries information.
3-Individual cells (neurons and non-neurons) learn; they don't require a synapse and an external cell for that capability.
4-Neurons are influenced by the electromagnetic field around them, and somehow that influence would need to be included in any calculation of information flow.
kelseyfrog
yesterday at 8:05 PM
A 5090 has a peak theoretical limit of 3356 TOPS for GenAI workloads. So we're "already" an order of magnitude greater than what was considered enough for AGI. One question is, "What happened here?" Was the original estimate wrong? Have we not found the "right" algorithm yet? Something else?
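To make the gap concrete, a trivial calculation (the brain figure is the paper's estimate, the GPU figure is the spec above):

```python
brain_estimate_tops = 100   # the paper's upper bound, ~10^14 ops/s
rtx_5090_tops = 3356        # peak theoretical GenAI TOPS quoted above
print(rtx_5090_tops / brain_estimate_tops)  # ~33.6x, about 1.5 orders of magnitude
```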
lukeschlather
yesterday at 8:33 PM
"We haven't found the 'right' algorithm yet." seems like the obvious answer, but the numbers in the paper all make sense and I'm interested in some more exotic explanations why it could actually be some orders of magnitude more than a 5090.
Although that's not looking at memory, and I'm also interested in some explanation of that: a 5090 has 32 GB, whereas a human brain has more like a petabyte of memory assuming 1 byte/synapse. Which is to say 1 million GB, in which case even a large cluster of H100s has an absurd amount of TOPS but nowhere near enough high-speed memory.
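A quick sketch of that memory math, with the same caveats (the synapse count and bytes-per-synapse are assumptions, and 80 GB is the usual H100 HBM size):

```python
synapses = 1e15           # the petabyte claim above implies ~10^15 synapses
bytes_per_synapse = 1     # the paper's "1 byte" estimate
brain_bytes = synapses * bytes_per_synapse   # ~1 PB = 1,000,000 GB

h100_hbm = 80e9           # 80 GB of HBM per H100
print(f"{brain_bytes / h100_hbm:,.0f} H100s just to hold it in HBM")  # ~12,500
```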
We are constantly learning (updating the network) in addition to doing inference. Quite possibly our brains allocate more resources to learning than to inference.
Perhaps AI companies don't know how to run continuous learning on their models (a toy sketch of the problem follows this list):
* it's unrealistic to do it for one big model, because it will instantly start shifting in an unknown direction
* they can't make millions of clones of their model, run them separately, and set them free the way it happens with humans
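Here's the toy sketch. Everything in it is invented for illustration (the dimensions, the learning rule, the environments), not anyone's actual setup; the point is just that identical copies of an online learner, each fed its own experience stream, quickly stop being the same model:

```python
import numpy as np

def run_clone(seed, steps=2000, lr=0.01, dim=8):
    """One 'clone': starts from shared weights, then learns online
    from its own private stream of experience."""
    rng = np.random.default_rng(seed)
    target = rng.normal(size=dim)            # this clone's private environment
    w = np.zeros(dim)                        # shared starting point for all clones
    for _ in range(steps):
        x = rng.normal(size=dim)             # an observation
        y = x @ target + 0.1 * rng.normal()  # noisy feedback from its environment
        w -= lr * (w @ x - y) * x            # continuous (online) gradient update
    return w

clones = [run_clone(seed) for seed in range(3)]
print("distance between clones 0 and 1:", np.linalg.norm(clones[0] - clones[1]))
print("distance between clones 0 and 2:", np.linalg.norm(clones[0] - clones[2]))
```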
SoftTalker
yesterday at 8:10 PM
Nature needed 3.5 billion years to work it out, and we're going to solve it in a few decades?
kelseyfrog
yesterday at 8:24 PM
It depends on where we draw the starting line. We're already at parity with the stretch from 3.5 Bya to 541 Mya, because no neurons existed in that interval. Only more recently, in the Cambrian, do we have evidence that voltage-gated potassium signaling evolved[1].
That likely changes the calculus very little, but it feels more accurate.
1. https://www.cell.com/current-biology/pdf/S0960-9822(16)30489...
mathgeek
yesterday at 8:39 PM
I know it's a silly question to begin with, but if you analyze it seriously, you'd want to at most compare human intelligence -> superintelligence with the 20 million years between the first Hominidae and Homo (and even that is probably too large a span for some folks to compare with).
One could even argue you should only compare it back to the discovery of writing or similar.
Jyaif
yesterday at 8:46 PM
That's not an argument. Nature never worked out going into space, yet we solved it in a few decades.
jll29
yesterday at 9:48 PM
Yes but that's "in a few decades" ON TOP of millions of years.
If I had to give an estimate, I would consider not so much the time taken to date as the current state of our knowledge of how the brain works, and how that knowledge has grown in the last decades. There is almost nothing we know as little about as the human brain and how thoughts are represented, modern imaging techniques notwithstanding.
> Yes but that's "in a few decades" ON TOP of millions of years.
If that's the bar, then anything else can fit in "a few decades", since that also rests "ON TOP of millions of years".
SoftTalker
yesterday at 9:18 PM
It worked out flying, though, millions of years before we did, and we still don't do it as well. We can't even do walking as well as nature does.
Walking is easy compared to elbows, fingers and thumbs. It’s just falling over in a controlled fashion. I hear at least one company in Boston figured it out.
Anyway, humanoid robots should be big in the next 10-20 years. The compute, the batteries, the algorithms are all coming together.
derektank
yesterday at 10:17 PM
We do flying better. If you adjust for our body weight, a modern airliner uses less energy per traveller mile than your average migratory bird. And the airliner goes much faster.
gnz11
yesterday at 9:55 PM
One could argue nature solved it by evolving homo sapiens.
Re the capabilities of neurons, the argument in Moravec's paper seems quite solid: it compares the capabilities of a bit of the brain we understand quite well, the retina, to computer programs performing the same function.
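For anyone who hasn't read it, the retina argument scales roughly like this; the numbers are my recollection of Moravec's 1997 essay and should be treated as approximate:

```python
retina_mips = 1_000        # retina's image processing ~ 1,000 MIPS equivalent
brain_to_retina = 1e5      # whole brain ~ 100,000x the retina's neural tissue
brain_ops = retina_mips * 1e6 * brain_to_retina  # ~1e14 ops/s
print(f"~{brain_ops / 1e12:.0f} TOPS")           # ~100 TOPS, the figure discussed above
```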
My feeling is we have enough compute for ASI already but not algorithms like the brain. I'm not sure if it'll get solved by smart humans analysing it or by something like AlphaEvolve (https://news.ycombinator.com/item?id=43985489).
One advantage of computers being much quicker than needed is you can run lots of experiments.
Just the power requirements make me think current algorithms are pretty inefficient compared to the brain.