This is a forum for those interested in participating in discussions about the foundational theories of neuro-computation and neuro-cognition.
This website is currently still under construction. Please scroll down to get a first glimpse of some of the topics it will cover. Open participation will be enabled at a later point in time.
T O P I C S:
Hierarchical Temporal Memory – HTM – (Jeff Hawkins)
Sparse Distributed Representations – SDRs – (Pentti Kanerva)
Latent Semantic Analysis – LSA – in NLP [1]
Singular Value Decomposition – SVD – in NLP [1]
Random Projection – RP – in NLP projects the input data onto a set of randomly chosen vectors, i.e., each reduced coordinate is just a dot product between an input vector and one random vector (used for dimensionality reduction; see the sketch after this list)
Network Neuroscience – NNSc – (Danielle S. Bassett and Olaf Sporns) in Nature Neuroscience, Vol. 20, pages 353-364; March 2017.
Graph Theory Methods: Applications in Brain Networks (Olaf Sporns) in Dialogues in Clinical Neuroscience, Vol. 20, No. 2, pages 111-121; June 2018.
[1] Landauer & Dumais, 1997, “A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge”
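To make the Random Projection entry above a bit more concrete, here is a minimal NumPy sketch (the dimensions and the data are invented for illustration): each reduced coordinate is the dot product between an input vector and one randomly chosen vector, and stacking those random vectors into a matrix approximately preserves pairwise distances (the Johnson–Lindenstrauss idea).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy high-dimensional data: 1000 samples in a 10,000-dimensional space
# (sizes chosen arbitrarily for illustration).
X = rng.random((1000, 10_000))

# Random projection: each column of R is one randomly chosen vector;
# X @ R takes the dot product of every input row with every random vector.
target_dim = 300
R = rng.standard_normal((10_000, target_dim)) / np.sqrt(target_dim)
X_reduced = X @ R                       # shape: (1000, 300)

# Pairwise distances are approximately preserved after the projection.
i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(X_reduced[i] - X_reduced[j]))
```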
Links:
https://arxiv.org/abs/1509.02897
Diagrams:
Critical Branching:
Article 001:
How the Mind Arises: Network Interactions in the Brain Create Thought (Scientific American, July 2019)
Here is an issue of Scientific American that all of us in the AI-Code-X community should take a much closer look at. I happen to be subscribed to the printed version and found this cover article by Max Bertolero and Danielle S. Bassett (both at the Complex Systems Group at the University of Pennsylvania) very insightful regarding the higher-level network dynamics of the neocortex. I believe it may help us shed some light on the H in HTM (Hierarchy).
Network Neuroscience is the new term, coined by Bassett. The article explores the recent discovery, via fMRI studies, of network dynamics in the cortex that can be broken down into 7 brain modules. (I would call these modular ensembles, because the modules are not strictly isolated, localized regions.) Within each module, activity tends to fire in sync: each module contains nodes that tend to activate all sections within its boundaries, forming a synchronized ensemble. Graph theory and simulations run by Olaf Sporns have been applied in this study.
The seven brain modules are listed as:
- Visual,
- Attention,
- Frontoparietal Control,
- Somatic Motor,
- Salience,
- Default and
- Limbic.
A series of psychological tests with given tasks has allowed the team to understand which of these modules are involved in and associated with each task type. Tasks linked to the Visual module include, for example: Braille reading, visual tracking, action observation, picture naming (silently), brightness perception, silent reading, etc. Interestingly, Braille reading also activates the Attention module, so some tasks activate multiple modules (ensembles). The Salience module seems to be involved in recognizing exceptions, in tasks like breath holding, awareness of the need to urinate, stimulation monitoring, or word stem completion (silent). The Frontoparietal Control module is key to reasoning, as in the Wisconsin Card Sorting Test, counting, the Tower of London (a complex planning task), or task-switching control.
Each of these seven modules includes a set of regions, usually clustered but not always contiguous. What keeps them together is a set of nodes (small node regions) that interlink the areas within each of the seven modules. There are also some hubs (super-nodes) that interconnect nodes across some of the seven modules; these act like bridges connecting two modules. These inter-modular hubs activate two or three of the modules during certain tasks. The article shows the links between a long list of tested tasks (around 73) and the association of each task with some of the seven modules. The strength of these associations is also shown in the diagram.
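As a toy illustration of the graph-theoretic side of this (the adjacency matrix and module labels below are invented, not taken from the article), the participation coefficient is one standard network measure for spotting such bridging hubs: a node whose connections are spread across several modules scores high, while a node that connects only within its own module scores 0.

```python
import numpy as np

# Toy undirected "brain graph": 6 nodes, adjacency matrix invented for illustration.
A = np.array([
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
])
modules = np.array([0, 0, 0, 1, 1, 1])   # node -> module label

# Participation coefficient: P_i = 1 - sum_m (k_im / k_i)^2,
# where k_i is node i's degree and k_im its degree into module m.
degree = A.sum(axis=1)
P = np.ones(len(A))
for m in np.unique(modules):
    k_im = A[:, modules == m].sum(axis=1)
    P -= (k_im / degree) ** 2

print(P)   # nodes 0 and 5, which bridge the two modules, score highest (hub-like)
```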
I highly recommend reading this article. I am attaching a link to the Scientific American website, but the article is unfortunately not accessible without purchase.
Kind regards, Joe (in Germany)
https://www.scientificamerican.com/magazine/sa/2019/07-01/
My personal take on this article:
Because this research is based on data obtained via fMRI, the granularity (or resolution) of the data is very coarse and unable to capture neural activity taking place in very small groups of neurons, such as the cortical minicolumns in the neocortex. For this reason, the findings remain relatively general in nature. However, I do see a very interesting implication that may be derivable from this research if it is further analyzed via methodologies with higher cell-activity resolution. The 7 modules defined in this study (modular ensembles) could actually be 7 coexisting hierarchical constructs that share certain elements while solving certain tasks. From a logical perspective, one could visualize the brain as consisting of 7 parallel, coexisting, intertwined hierarchical pyramids. This would be a very valuable discovery, if it can be confirmed or refined.
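Just to make the picture of intertwined hierarchical pyramids concrete, here is a minimal, purely illustrative sketch (the region names and tree shapes are invented): each module is modeled as its own hierarchy over regions, and regions appearing in more than one hierarchy are the shared elements.

```python
# Each module modeled as its own hierarchy: parent node -> child regions (names invented).
hierarchies = {
    "Visual":    {"V_root": ["V1", "V2"], "V2": ["MT", "IPS"]},
    "Attention": {"A_root": ["FEF", "IPS"], "IPS": ["LIP"]},
}

def regions(tree):
    """Collect every node mentioned in one module hierarchy."""
    nodes = set(tree)
    for children in tree.values():
        nodes.update(children)
    return nodes

# Regions shared between the two overlapping hierarchies (here: "IPS").
shared = regions(hierarchies["Visual"]) & regions(hierarchies["Attention"])
print(shared)
```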
Link to the Numenta HTM-Forum discussion about this article:
Hierarchical Neuro Associative Network Formation and Solution Path Indexing
This is a concept I am taking on within the context of HTM (Hierarchical Temporal Memory), because I perceive it to be very close to the core of universal cognitive processing and episodic memory in higher evolved species. This concept incorporates two important aspects of cognition into one central idea: 1. Learning and knowledge acquisition must be incorporated into an associative structure. 2. This associative structure of acquired knowledge must support the path search for needed solutions in the future. Therefore, the same associative-structure logic that is applied during knowledge acquisition must also be usable at a future time for solution search. Furthermore, the process of laying out the associative structure of knowledge during acquisition must also take into account all relevant existing knowledge structures, avoiding memory redundancies and potential contradictions. Nevertheless, a process of continual knowledge-structure review and consolidation must also be put in place, and temporary redundancies must inevitably be tolerated by the associative cognitive system.
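A very rough, hypothetical sketch of these two aspects (the function names and the toy associations are my own choices, not an HTM mechanism): knowledge acquisition writes associations into one shared structure, and solution search later traverses that very same structure to find a path from a known concept to a needed goal.

```python
from collections import deque

# 1. Knowledge acquisition: associations are stored in one shared structure.
associations = {}          # concept -> set of associated concepts

def learn(a, b):
    """Record a bidirectional association between two concepts."""
    associations.setdefault(a, set()).add(b)
    associations.setdefault(b, set()).add(a)

# 2. Solution search: the same structure is traversed to find a solution path.
def solution_path(start, goal):
    """Breadth-first search over the acquired associations."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in associations.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

learn("wet streets", "rain")
learn("rain", "clouds")
print(solution_path("wet streets", "clouds"))   # ['wet streets', 'rain', 'clouds']
```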
It is within the context of these hierarchical neuro associative networks (HNANs) that other observed cognitive phenomena such as “cognitive dissonance” and “re-learning in sleep phases” find a foundational explanation. One important question is whether this HNAN formation during knowledge acquisition is a universal aspect of all learning in brains, not only at higher levels of knowledge abstraction but also at lower sensory-motor levels of learning. My current hypothesis of HNAN formation is that there is a fundamental set of universal knowledge associative rules (KARs) that is applied by brains at all levels of learning, while there may be several additional extensions to the set of universal KARs for higher levels of abstraction. This would make sense, as for higher levels of complexity some additional rules may prove useful. But I am not aware of any empirical evidence for this at the time of writing. It is also my conjecture that higher abstraction levels are indeed more complex than lower levels of sensory information acquisition. This may very well prove to be incorrect. Nor can we discard the possibility that the extended set of KARs differs at different levels of the hierarchy. In any case, I think it is safe to assume that a universal core of KARs is in place at all levels of HNAN formation, and this universal core of KARs lays down the foundation for all future solution search paths. Additionally, there must also be a process for HNAN adjustments, as future solution searches prove them to be necessary. These HNAN adjustments must always be conservative, meaning they do not destroy older, discarded associations for a very long period of time. This allows for associative path corrections that remain reversible for a long period of time.
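One way to picture the conservative-adjustment idea, again as a hypothetical sketch rather than a proposed mechanism: discarded associations are only marked as deprecated, so they are skipped by search but can be restored later, keeping path corrections reversible.

```python
import time

# Hypothetical association store: (concept_a, concept_b) -> record with a status flag.
links = {}

def associate(a, b):
    links[(a, b)] = {"status": "active", "changed": time.time()}

def deprecate(a, b):
    """Conservative adjustment: mark the link as discarded instead of deleting it."""
    if (a, b) in links:
        links[(a, b)]["status"] = "deprecated"
        links[(a, b)]["changed"] = time.time()

def restore(a, b):
    """Path corrections stay reversible as long as the deprecated link is kept."""
    if (a, b) in links and links[(a, b)]["status"] == "deprecated":
        links[(a, b)]["status"] = "active"

associate("rain", "clouds")
deprecate("rain", "clouds")   # discarded, but not destroyed
restore("rain", "clouds")     # reversible correction
print(links)
```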