# Is the Mind a Natural Intelligence (NI) Large Language Model (LLM)?
For decades, "computationalist" cognitive scientists have compared the mind/brain to the software and hardware of a digital computer.
We suggest that a much better, and much simpler, computer science parallel with human mental activity is the large language model (LLM) of today's artificial intelligence (AI). A chatbot's reply to a question is assembled from pre-trained sequences of words that have high transition probabilities given the sequence of words in the question.
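As a rough illustration of this word-by-word continuation (a toy sketch, not how any production chatbot is actually built), a simple bigram model can learn word-to-word transition counts from a tiny corpus and then extend a prompt one word at a time:

```python
import random
from collections import defaultdict

# Toy sketch: learn word-to-word transitions from a tiny "training" corpus,
# then continue a prompt one word at a time by sampling a likely next word.
corpus = "the best thing about AI is its ability to learn from its training data".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def continue_text(prompt, n_words=4):
    words = prompt.split()
    for _ in range(n_words):
        candidates = transitions.get(words[-1])
        if not candidates:                       # no known continuation
            break
        words.append(random.choice(candidates))  # sampled in proportion to counts
    return " ".join(words)

print(continue_text("ability to"))   # e.g. "ability to learn from its training"
```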
Compare the past experiences reproduced by the [Experience Recorder and Reproducer (ERR)](https://www.informationphilosopher.com/knowledge/ERR/).1 These are past experiences that are stimulated to fire again because the pattern of current somatosensory inputs, or simply our current thinking in the prefrontal cortex, resembles the stored experiences in some way. The ERR is an extension of [Donald Hebb](https://www.informationphilosopher.com/solutions/scientists/hebb/)'s "neurons that fire together get wired together." The ERR assumes that "neurons that have been wired together in the past will fire together in the future," as first noted by [Giulio Tononi](https://www.informationphilosopher.com/solutions/scientists/tononi/).2
We can say that the brain is "trained" by past experiences, just as a large language model is trained on sequences of words. And, like the LLM, a new experience or our current decision-making will recall/reproduce experiences that are statistically similar, providing the brain/mind with the context needed to interpret, to find meaning in, the new experience and to provide options for our decisions.2
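A minimal sketch of this statistical recall, with experience names and features invented purely for illustration: stored experiences are cued by whatever new input shares the most features with them.

```python
# Toy sketch (experience names and features are invented for illustration):
# stored "experiences" are sets of features; a new experience cues the
# stored ones that share the most features with it, supplying context
# for interpretation and options for decisions.
stored_experiences = {
    "beach vacation": {"sand", "waves", "sunlight", "salt air"},
    "campfire night": {"smoke", "warmth", "darkness", "crackling"},
    "morning walk":   {"sunlight", "birdsong", "cool air"},
}

def recall(new_experience, minimum_overlap=1):
    """Rank stored experiences by how many features they share with the new one."""
    scored = [
        (name, len(features & new_experience))
        for name, features in stored_experiences.items()
    ]
    return sorted(
        [(name, score) for name, score in scored if score >= minimum_overlap],
        key=lambda pair: pair[1],
        reverse=True,
    )

print(recall({"sunlight", "waves", "cool air"}))
# [('beach vacation', 2), ('morning walk', 2)]
```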
A new experience that is nothing like any past experience is essentially meaningless.
Does this parallel between artificial intelligence software running on digital computer hardware and human natural intelligence software running on analog brain hardware make sense? In the most popular consciousness models, such as [Bernard Baars](https://www.informationphilosopher.com/solutions/scientists/baars/)' Global Workspace Theory or the Global Neuronal Workspace Theory of [Stanislas Dehaene](https://www.informationphilosopher.com/solutions/scientists/dehaene/) and [Jean-Pierre Changeux](https://www.informationphilosopher.com/solutions/scientists/changeux/), the fundamental idea is that information is retrieved from its storage location and displayed as a _representation_ of the information to be processed digitally and viewed by some sort of executive agency (or Central Ego as [Daniel Dennett](https://www.informationphilosopher.com/solutions/philosophers/dennett/) called it).
Unlike computational models, which do not say where information is stored in the brain, the ERR explains very simply where it is stored: in the thousands of neurons that have been wired together in a Hebbian assembly. The stored information does not get recalled or retrieved (as in a computer) to create a representation that can be viewed in a mental display. We can more accurately call it a direct reproduction or re-presentation to the mind.
Our hypothesis is that when multiple Hebbian assemblies of wired-together neurons fire again because a new experience has something in common with all of them, they could create what [William James](https://www.informationphilosopher.com/solutions/philosophers/james/) called a "blooming, buzzing confusion" in the "stream of consciousness." They would generate what James called [alternative possibilities](https://www.informationphilosopher.com/freedom/alternative_possibilities.html), one of which will get the mind's "attention" and its "focus." Since each Hebbian assembly is connected to multiple regions in the neocortex, e.g., the visual, auditory, olfactory, and somatosensory cortices, and to multiple sub-cortical structures, such as the hippocampus and amygdala, when an assembly is chosen all those connected brain areas are bound together again.
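One way to picture this (a toy sketch with invented neuron labels and region names, not a claim about actual anatomy): a Hebbian assembly links neurons across several regions, and a partial cue that touches any member can reactivate the whole assembly, re-binding every connected region.

```python
# Toy sketch (neuron labels and brain regions are invented for illustration):
# a Hebbian assembly wires together neurons from several regions during an
# experience; later, a partial cue that activates any of its members fires
# the whole assembly again, re-binding all the connected regions at once.
assembly = {
    "v17": "visual cortex",
    "a22": "auditory cortex",
    "s3":  "somatosensory cortex",
    "am9": "amygdala",
    "hc4": "hippocampus",
}

def reactivate(cue_neurons, assembly, overlap_needed=1):
    """If the cue overlaps the assembly, every connected region is bound again."""
    overlap = cue_neurons & assembly.keys()
    if len(overlap) >= overlap_needed:
        return sorted(set(assembly.values()))
    return []

print(reactivate({"v17"}, assembly))
# ['amygdala', 'auditory cortex', 'hippocampus', 'somatosensory cortex', 'visual cortex']
```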
Very simply, everything going on in the original experience is appearing again, perhaps fainter than the original, as [David Hume](https://www.informationphilosopher.com/solutions/philosophers/hume/) observed of the weaker "ideas" copied from his vivid "impressions." The mind is "seeing" the original experience, but not because the brain has produced a visual representation or display for a conscious observer to look at. The brain/mind is also "feeling" the emotions of the original experience, as well as seeing it in color, solving [David Chalmers](https://informationphilosopher.com/solutions/philosophers/chalmers/)' "hard problem" of subjective qualia.
The ERR is simply reproducing or "re-presenting" the original experience in all the parts of the mind connected by the neural assembly, solving the so-called "binding problem." Experience is unified because the stored information is distributed throughout the Hebbian assembly and all the brain elements to which its neurons are connected.
The ERR is a presentation or re-presentation to the conscious mind, not a representation on a screen as in Global Workspace Theories and their "theater of consciousness."
In a break from computational models of the mind, we can assert that man is not a machine, the brain is not a computer, and although the mind is full of immaterial information stored in the material brain, the information is not being processed digitally by a central processor or parallel processors.
We can also say that the Crick and Koch neural correlates6 of a conscious experience are just those neurons wired together in the Hebbian assembly created by the experience.
Our Natural Intelligence LLM is _human intelligence_, but it is built on the ERR model and the Two-Stage Free Will model endorsed by [Martin Heisenberg](https://www.informationphilosopher.com/solutions/scientists/heisenbergm/) in 2010 as explaining "behavioral freedom" in lower animals such as fruit flies, and even in bacterial chemotaxis.4 As such, the human mind can be seen as having evolved from the lowest animal intelligence, and even from single-celled organism intelligence, although bacterial "experiences" are not learned but acquired genetically.
Stephen Wolfram has concisely explained the workings of an LLM...
**What Is ChatGPT Doing... and Why Does It Work?**
It’s Just Adding One Word at a Time
That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT— and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on — and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)
The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
So let’s say we’ve got the text “The best thing about AI is its ability to”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense “match in meaning”. But the end result is that it produces a ranked list of words that might follow, together with “probabilities” 5
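A minimal sketch of the counting Wolfram describes, with a tiny stand-in corpus in place of "billions of webpages": find every occurrence of the prompt and tally which word comes next, reported as a fraction of all continuations.

```python
from collections import Counter

# Toy sketch of the counting described above: the corpus is a tiny stand-in
# for "billions of webpages". Find each occurrence of the prompt and tally
# which word comes next, as a fraction of all observed continuations.
corpus = ("the best thing about AI is its ability to learn "
          "the best thing about AI is its ability to generalize "
          "the best thing about AI is its ability to learn").split()

prompt = ["ability", "to"]
next_word_counts = Counter(
    corpus[i + len(prompt)]
    for i in range(len(corpus) - len(prompt))
    if corpus[i:i + len(prompt)] == prompt
)
total = sum(next_word_counts.values())
ranked = [(word, count / total) for word, count in next_word_counts.most_common()]
print(ranked)   # [('learn', 0.666...), ('generalize', 0.333...)]
```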