Human Brain vs. Computer Brain - Geoffrey Hinton
One theory of the brain holds that it functions like a ‘computational machine’. The visual cortex is the area of the cerebral cortex that first processes visual information; it underlies the brain's ability to draw conclusions from what the eyes take in. Visual perception is necessary for reading, writing, and movement. Neuroscientists have begun to piece together models of the visual pathway in the mammalian brain, and a central feature of those models is a hierarchy of information processing.
In the 1980s, there were two schools of thought in AI. There was ‘Mainstream AI’, and then there was Geoffrey Hinton's approach. The ‘Mainstream AI’ camp thought that AI was all about ‘Reasoning and Logic’. Geoffrey Hinton's work was on ‘Neural Nets’, which were not considered ‘AI’ at the time; they were studied as models of biology and the mind. Hinton pursued them because he believed they were the only approach that would really work, basing his theory on the idea that ‘Connections between Neurons change and that's how you Learn’.
Mainstream AI, on the other hand, based its theories on ‘Reasoning and Logic’. In the long run, Geoffrey Hinton turned out to be right, but in the short term it looked fairly hopeless, and he was even made fun of because neural networks were not working very well in the 1980s. That was because the computers were not fast enough and the data sets were not big enough. The big question in the 1980s was: could you take a big neural network, with lots of neurons, compute nodes, and connections between them, that learns simply by changing the strengths of those connections, show it data with no innate prior knowledge, and expect it to learn how to do things? People in Mainstream AI thought that was completely ridiculous. It sounded a little ridiculous then, but it works now because there are ample compute and data resources.
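A minimal sketch of that idea (hypothetical toy code, not Hinton's): a single artificial neuron starts with no knowledge and learns a hidden rule purely by adjusting its connection strengths from data.

```python
# Toy illustration: "learning" as nothing more than adjusting connection
# strengths (weights) so the neuron better fits the data it has seen.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: inputs X and a target rule the neuron is never told.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # hidden rule: sum of inputs > 0

w = np.zeros(2)        # connection strengths, starting with no knowledge
b = 0.0
lr = 0.1               # learning rate

for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))        # the neuron's output (sigmoid)
    grad_w = X.T @ (p - y) / len(y)     # how each connection should change
    grad_b = np.mean(p - y)
    w -= lr * grad_w                    # learning = changing the connections
    b -= lr * grad_b

print("learned weights:", w, "accuracy:", np.mean((p > 0.5) == y))
```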
Geoffrey Hinton's argument for computers modeled on the brain was simply that the brain works: humans and animals can do what they do because of it. Most strikingly, humans can do things we did not evolve for, like reading. Reading is far too recent a development for evolution to have shaped us for it, yet the brain lets us learn to read. We can also learn mathematics, and much more. So there must be a general way for the biological neural networks in the brain to learn.
Geoffrey Hinton concluded that there are two different paths to intelligence. One path is a ‘Biological Path’, i.e. the human brain, where the hardware is a bit flaky and ‘Analog’. With this analog hardware we communicate by using natural language and by showing each other how to do things, i.e. imitation. We can only pass on what we can say in a sentence, which is not that many bits per second, so humans are really quite bad at communicating.
The other path is ‘Digital’, i.e. the current neural network models running on digital computers. These models can share on the order of ‘one hundred trillion’ numbers with each other, so their communication bandwidth is huge. Because they are exact clones of the same model running on different computers, they can see huge amounts of data: different copies see different data and then combine what they learned, which is far more than any one person could ever absorb. But why are humans still smarter than them?
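A toy sketch (hypothetical, plain numpy, not any production training system) of why identical digital copies can pool their experience: each clone computes a weight update on its own shard of data, and because all copies share exactly the same weights, the updates can simply be averaged and applied to every clone.

```python
# Two identical "clones" of a linear model learn from different data shards,
# then combine what they learned by averaging their weight updates.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)                      # the shared model: one weight vector

def local_gradient(weights, X, y):
    """Gradient of squared error for a linear model on one clone's data."""
    pred = X @ weights
    return X.T @ (pred - y) / len(y)

# Two shards of data generated by the same underlying rule.
true_w = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    shards.append((X, y))

lr = 0.1
for step in range(200):
    grads = [local_gradient(w, X, y) for X, y in shards]  # learned separately
    w -= lr * np.mean(grads, axis=0)   # combine what both clones learned

print("recovered weights:", np.round(w, 2))  # close to the true rule
```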
The current ‘digital brains’ in computers are like ‘idiot savants’. ChatGPT knows far more than any one person; in a contest of sheer knowledge, it would wipe out any individual. It can do amazing things, it can write poems, it can do all that. But it is not good at reasoning; computers are still not good at ‘Reasoning’, and we are better at it. We also have to extract our knowledge from much less data. We have about a hundred trillion connections, most of which are learned, but we only live for on the order of a billion seconds, which is not very long. Models like ChatGPT, by contrast, have absorbed their data over far more time than that, spread across many different computers.
(An average human's life expectancy 200 years ago was just 35 years. 100 years ago it was only 48 years, roughly 1.5 billion seconds. Over the last 200 years, U.S. life expectancy has more than doubled to almost 80 years, roughly 2.5 billion seconds, due to vast improvements in health and quality of life. And keep in mind that many human brains were lost to war, famine, and disease, some of them holding extremely valuable knowledge.)
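A quick check of the years-to-seconds figures quoted above:

```python
# Convert lifespans in years to (approximate) billions of seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # ~31.6 million seconds

for years in (35, 48, 80):
    print(f"{years} years ≈ {years * SECONDS_PER_YEAR / 1e9:.2f} billion seconds")
# 35 years ≈ 1.10, 48 years ≈ 1.51, 80 years ≈ 2.52 billion seconds
```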
For example, Geoffrey Hinton was working on neural networks in the 1980s. If you took a computation we run today (in 2021) and started it learning on a 1986 computer, it would still be running now and would not have gotten anywhere, whereas the same computation takes only a few seconds to learn on a modern computer.
What was holding things back in 1986 was the lack of computing power and the absence of large data sets to train on. Everything would have worked then if there had been enough data and enough compute.
In the 1990s, computers improved, but other learning techniques appeared that, on small data sets, worked at least as well as neural networks, were easier to explain, and had much fancier mathematical theory behind them. That made people within computer science lose interest in neural networks. Researchers within psychology did not, because they are interested in how people might actually learn, and those other techniques looked even less plausible than ‘Back Propagation’.
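For reference, a minimal sketch (hypothetical toy code, plain numpy) of the back propagation mentioned above: the error at the output of a small two-layer network is propagated backwards to tell every connection how it should change.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a task a single layer cannot learn, but a hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden connections
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output connections
b2 = np.zeros(1)
lr = 1.0

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the network
    d_out = (out - y) * out * (1 - out)    # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer

    # Learning is, again, just changing the connection strengths
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(np.round(out, 2))   # typically approaches [0, 1, 1, 0]
```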