For people wanting to look into HTM (Hierarchical Temporal Memory), do check out Numenta's main website [1], in particular the papers [2] and videos [3] sections.
Otherwise, HTM inventor Jeff Hawkins' book "On Intelligence" is one of the three or so most fascinating books I've ever read. It doesn't cover HTM itself, just how the brain works at a conceptual level, but in a way I haven't seen anyone else explain. Jeff clearly has an ability to see the forest for the trees that is not commonly found. This is one of the reasons I think HTM might be on to something, although of course it still has to prove itself in real life.
But we should remember how long classic neural networks were NOT particularly successful, and were almost dismissed by a lot of people (including my university teacher, who was rather skeptical about them when I took an ML course about 12 years ago; I personally believed in them a lot). We had to "wait" for years and years until enough people had put in enough work to figure out how to make them really shine.
I'm curious how close the research community is to general AI
Nobody knows, because we don't know how to do it yet. There could be a "big breakthrough" tomorrow that more or less finishes the job, or it could take 100 years, or - worst case - Penrose turns out to be right and it's not possible at all.
Also, are there useful books, courses or papers that go into general AI research?
If you're interested, you should read On Intelligence [1] by Jeff Hawkins (inventor of the Palm Pilot). In it, Hawkins presents a compelling theory of how the human brain works and how we can finally build intelligent machines. In fact, Andrew Ng's deep learning research builds on Hawkins's "one algorithm" hypothesis.
Actually, there's a growing amount of evidence that a single, general-purpose algorithm in the human brain gives rise to intelligence. For one, every part of the neocortex looks and behaves roughly the same. The brain is also very plastic in what it learns – the auditory cortex can learn to "see" if the signals from the eyes are rewired to it instead of to the visual cortex. It's very unlikely that our brain is hard-wired to recognize faces, for instance; rather, it learns to do so using this generic learning algorithm.
There's nothing about the idea of physical consciousness that says it has to be a continuum -- there could just be some critical mass or qualitative attribute of brains that puts us "over the threshold", so to speak. Nobody can give any kind of a definitive answer. For ideas about a "continuum" of consciousness, you might read Phi:
[1] https://www.amazon.com/Intelligence-Understanding-Creation-I...
Edit: Fixed book link.
Of course there are. See:
https://www.amazon.com/Engineering-General-Intelligence-Part...
https://www.amazon.com/Engineering-General-Intelligence-Part...
https://www.amazon.com/Artificial-General-Intelligence-Cogni...
https://www.amazon.com/Universal-Artificial-Intelligence-Alg...
https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0...
https://www.amazon.com/Intelligence-Understanding-Creation-I...
https://www.amazon.com/Society-Mind-Marvin-Minsky/dp/0671657...
https://www.amazon.com/Unified-Theories-Cognition-William-Le...
https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Ma...
https://www.amazon.com/Singularity-Near-Humans-Transcend-Bio...
https://www.amazon.com/Emotion-Machine-Commonsense-Artificia...
https://www.amazon.com/Physical-Universe-Oxford-Cognitive-Ar...
See also the work on various "cognitive architectures", including SOAR, ACT-R, CLARION, etc.:
https://en.wikipedia.org/wiki/Cognitive_architecture
"Neuroevolution"
https://en.wikipedia.org/wiki/Neuroevolution
and "Biologically Inspired Computing"
https://en.wikipedia.org/wiki/Biologically_inspired_computin...
[1]: https://www.amazon.com/Intelligence-Understanding-Creation-I...
I urge you to watch Andrew Ng's talk that I linked to in the post, and read On Intelligence [1] by Jeff Hawkins, a book that totally changed the way I look at intelligent behavior.
http://www.amazon.com/Phi-A-Voyage-Brain-Soul-ebook/dp/B0078...
Or for other views, you might check out V. S. Ramachandran (neuroscience): http://www.amazon.com/Brief-Tour-Human-Consciousness-Imposto...
Jeff Hawkins (computer science): http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...
Hofstadter (mathematics, cognitive science): http://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/...
Those are some of my favorite popular-press books on the subject.