That’s capturing everything. Ultimately you need only a tiny fraction of that data to emulate the human brain.
Numenta is working on a brain model to create functional sections of the brain. Their approach is different, though: they are trying to understand the components and how they work together rather than just aggregating vast amounts of data.
No, it does not. It captures only the physical structures; the chemical and electrical state is still missing.
Think of this:
You find a computer from 1990. You take a picture (image) of a 1 KB memory chip on a RAM stick, using a DSLR camera, and your image in RAW comes out at 1 GB. Since there are 8 chips per stick and 4 sticks, you project it will take 32 GB of images to capture your 32 KB of RAM.
You’ve described nothing about the RAM. The measurement is meaningless other than telling you how detailed the imaging process is.
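A quick back-of-envelope version of that analogy in Python, using the same made-up numbers (1 KB chips, 8 chips per stick, 4 sticks, 1 GB per RAW photo):

```python
# Back-of-envelope math for the RAM-imaging analogy above.
# All numbers are the hypothetical ones from the analogy, not real measurements.
KB = 1024
GB = 1024 ** 3

chips_per_stick = 8
sticks = 4
ram_per_chip = 1 * KB           # assumed capacity of one chip
raw_image_per_chip = 1 * GB     # assumed size of one RAW photo of a chip

total_chips = chips_per_stick * sticks            # 32 chips
total_ram = total_chips * ram_per_chip            # 32 KB of actual memory
total_images = total_chips * raw_image_per_chip   # 32 GB of photos

print(f"RAM captured:  {total_ram / KB:.0f} KB")
print(f"Imaging data:  {total_images / GB:.0f} GB")
print(f"Blow-up ratio: {total_images / total_ram:,.0f}x")
# Roughly a million times more bytes of photos than bytes of memory,
# and the photos still say nothing about what the RAM actually holds.
```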
Of course, that’s not to say the data isn’t also important. It’s very possible that we’re missing something crucial about how the brain functions, despite everything we know so far. The more data we have, the better we can build and test these more streamlined models.
These models would likely be tested against these real datasets, so they help each other.
I am curious how that conclusion was formed, as we have only recently discovered many new types of functional brain cells.
While I am not saying this is the case, that statement sounds like it was based on the “we only use 10% of our brain” myth, which is why I am asking for clarification.
They took imaging scans. It’s as if I took a picture of a 1 MB memory chip and said, omg, my picture is 4 GB in RAW. The RAM stick that chip was on could take dozens of GB to image!
Not taking a position on this, but I could see a comparison with doing an electron scan of a painting. The scan would take an insane amount of storage while the (albeit ultra high definition) picture would fit on a Blu-ray.
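To put rough numbers on that comparison (the resolutions and byte counts below are illustrative assumptions, not real instrument specs):

```python
# Rough storage estimate for the painting comparison above.
# Every number here is an illustrative assumption, not a real spec.
painting_area_m2 = 1.0                 # assume a 1 m^2 painting

electron_pixel_m = 10e-9               # assume a 10 nm scan resolution
electron_bytes = (painting_area_m2 / electron_pixel_m ** 2) * 1   # 1 byte/pixel

photo_pixel_m = 10e-6                  # assume a 10 µm (~2540 dpi) photo
photo_bytes = (painting_area_m2 / photo_pixel_m ** 2) * 3         # 3 bytes/pixel

print(f"Electron scan: ~{electron_bytes / 1e12:,.0f} TB")   # ~10,000 TB
print(f"Photo:         ~{photo_bytes / 1e9:,.0f} GB")       # ~30 GB, Blu-ray territory
```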
Oh, I’m not basing that on the 10% mumbo jumbo, just on the fact that data capture usually over-captures. Distilling it down to just the bare functional essence will result in a far smaller data set. Granted, as you noted, there are new neuron types still being discovered, so what to discard is the question.
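A toy sketch of that over-capture vs. distilled-essence idea (every number below is a made-up assumption, chosen only to show the principle):

```python
# Toy illustration of "capture everything, then distill".
# All sizes are made-up assumptions, not real connectomics figures.
raw_voxels = 10_000 ** 3            # imagine a 10,000^3-voxel imaged block
raw_bytes = raw_voxels * 1          # at 1 byte per voxel -> ~1 TB of raw imagery

neurons = 10_000                    # assume the block contains ~10k neurons
synapses_per_neuron = 1_000
bytes_per_synapse = 16              # e.g. (pre_id, post_id, weight) packed
distilled_bytes = neurons * synapses_per_neuron * bytes_per_synapse

print(f"raw capture:     {raw_bytes / 1e12:.1f} TB")
print(f"distilled graph: {distilled_bytes / 1e6:.0f} MB")
print(f"reduction:       ~{raw_bytes / distilled_bytes:,.0f}x")
# The open question from the thread: which of the discarded bytes
# actually mattered?
```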
I don’t think any simplified model can work EXACTLY like the real thing. Ask rocket scientists.
Fortunately it doesn’t have to be exactly like the real thing to be useful. Just ask machine learning scientists.
Given the prevalence of intelligence in nature using vastly different neurons, I’m not sure you even need an exact emulation of the real thing to achieve the same result.
No, that captures just the neuroanatomy, not properties like ion channel density, synapse type and strength, and all the things we don’t know about yet.
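As a sketch of the difference, here is a hypothetical pair of records: what a structural scan can populate versus what an emulation would also need. The field names are illustrative, not any real connectomics schema:

```python
# Hypothetical sketch: anatomical data vs. the physiological state it omits.
# Field names are illustrative assumptions, not a real data format.
from dataclasses import dataclass

@dataclass
class SynapseAnatomy:
    # What structural imaging can give you: which cells connect, and where.
    pre_neuron_id: int
    post_neuron_id: int
    position_um: tuple          # (x, y, z) location in the tissue volume

@dataclass
class SynapsePhysiology:
    # What it does not give you: the functional properties listed above.
    neurotransmitter: str       # excitatory vs. inhibitory type
    weight: float               # synaptic strength / "value"
    ion_channel_density: float  # channels per unit of membrane area
    # ...plus whatever properties we have not discovered yet
```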
Never seen Numenta talked about in the wild! I worked with them on a pattern recognition project in college, and it was freakily similar to how toddlers learn about the world around them.
I mean, they probably use vast amounts of data to learn how it all works.
Point for simulation theory.