A friend of Facebook posted a link to this article on Aeon by Robert Epstein, The Empty Brain.
The subtitle sums up the argument this essay makes well:
‘Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.’
When you think about it this seems obvious. Brain cells are not bytes. We do not ‘store’ memories or symbols or anything else in specific cells. What we learn, how we change, and how we recall and recount information varies from person to person and experience to experience, and will never be found ‘stored’ in perfect replication inside a person’s brain. We do not have root drives, we do not think according to algorithms, and we do not translate the world into 0s and 1s. Yet the idea that the brain is like a computer is so pervasive it’s almost impossible to think about it in any other way.
By looking at metaphors of intelligence through history and comparing them with the current dominant metaphor of the brain as a computer, he argues that in time we are likely to see the computer metaphor as just as silly as the notion that intelligence and the brain are the work of humors, or of internal gears and sprockets.
He also points out the faulty logic in the premise:
‘The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.’
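The flaw in the syllogism is the classic converse error: both premises say that computers sit inside two larger categories, but nothing guarantees those categories coincide outside the computers themselves. A toy sketch in Python makes this concrete; the sets and their members are invented purely for illustration, not drawn from the essay:

```python
# Toy model of Epstein's faulty syllogism using sets.
# Categories and members are invented for illustration only.

computers = {"laptop", "phone"}
info_processors = {"laptop", "phone", "abacus"}   # premise 2: all computers process information
intelligent = {"laptop", "phone", "human"}        # premise 1: all computers can behave intelligently

# Both premises hold: every computer belongs to both larger categories.
assert computers <= info_processors
assert computers <= intelligent

# But the conclusion ("all intelligent entities are information processors")
# does not follow: "human" is in the intelligent set yet not in info_processors.
counterexamples = intelligent - info_processors
print(counterexamples)  # {'human'}
```

The subset checks pass while the conclusion fails, which is exactly the gap between ‘reasonable premises’ and ‘faulty conclusion’ that Epstein describes.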
Essentially this essay is a call to move on from an IP model of understanding the brain. What it implicitly raised for me, however, is the lack of alternative metaphors available. After all, we do translate the world into something: not 0s and 1s exactly, but something.
This too is something we are struggling to understand. We are only just starting to map and document how the brain changes through experience, disease or simply learning. We can see neural synapses change and identify proteins and chemical changes, but we don’t yet know why these things happen or how important they are.
Reading it, however, I had a number of thoughts of my own.
I hadn’t previously thought much about the IP model of intelligence, but as soon as I read this essay the realization that it is just one concept among many grabbed my imagination. It’s like looking at an image in positive space and suddenly seeing another in the negative: a white image drawn from the outline of a black one. Or like learning to read. Once something is pointed out and understood, you can’t easily ‘unsee’ it.
Similarly, the words we use to describe and understand arguments like this are deeply loaded with unseen concepts. Words are conceptual grenades, after all, they come packed with overlaid values and assumptions.
Epstein suggests that we don’t need the metaphor of a computer to study the brain, pointing to cognitive scientists who are trying to see the brain in a more ‘naturalistic’ way, or who take an ‘anti-representational’ view of human functioning. It will take me much longer than it has taken to read and write this to come to understand the ideas of the two psychologists he points to: Andrew Wilson and Sabrina Golonka from Leeds Beckett University in the UK, who blog at http://psychsciencenotes.blogspot.com.au/p/about-us.html.
What I am struck by from the outset, however, is that the logic behind this notion is just as problematic as the example he provided earlier. Consider this: Premise 1: metaphors are used to help us learn; we use them to interpret the world. Premise 2: outmoded metaphors can hinder our understanding of the world. Faulty conclusion: we don’t need metaphors to understand or learn about the world.
I don’t think that is true. We use metaphors to help us think; often we bundle them into things we call concepts and words. Concepts are intrinsic to how we learn, interpret, share and approach the world. If evidence and new experiences make a concept or a metaphor we have been using redundant it doesn’t mean we don’t need a metaphor; it means we need a new one.
Epstein makes two other conceptual sleights of hand in the essay that I felt needed to be more closely scrutinized. Using the example of trying to draw a picture of a US$1 bill from memory as opposed to from observation, he says:
‘From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.
As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.’
Firstly, the three examples given are only of ‘special note’, so we must assume there are others left unlisted. And are the three examples really so logically distinct? Each carries its own assumptions. I think what he’s trying to suggest is a three-stage argument: we learn from observation, from learned experience and from predictive logic, i.e. this is happening; in the past, that happened when I experienced something similar; therefore I believe something similar will happen now.
What truly, logically, separates these three categories? Are we changed by observation? Can it be separated from any of the other processes? Surely an observation is not, in and of itself, a change. What one person experiences because of an observation may differ from what another person experiences.
Do we learn, or are we changed, by ‘pairing’? What does that mean? When we hear a police siren and then moments later see the car, we will learn to associate the two. Okay, perhaps this is a form of learning. But when you think about it, we teach toddlers to recognize pictures of police cars by reading them books and making the sound of a siren as we read. The two experiences, seeing and hearing, are tied closely together from the outset to form a conceptual whole that allows us to understand the word and its concept, like ducks going quack or cows going moo. Many children will have learnt these associations long before they ever see a police car or a duck or a cow in real life. We explicitly teach children to think in this way: to think in words and in concepts, to associate words with images and sounds as a first step toward learning more complex concepts and linking them to words. Would a child outside this social norm think that way? I don’t know. Can we ever really know?
Then, most problematic to me, is the notion that we learn or are changed through punishment or reward. This idea is itself underpinned by a pervasive assumption, based on a particular theory of human behavior: that we will change our behavior based on our experiences. It has at its core another very powerful concept, that we are utility-maximizing rational individuals, the same notion tied up so explicitly in neo-classical economic philosophy. It assumes we will act in our best interests; that we will innately act to maximize our own utility, and therefore ‘learn’ to behave in ways that do so. Simple, but not true. Not always. It’s a model that explains only some of our behavior, like the various other metaphors we use. The idea that we learn from punishment and reward is as much an intellectual crutch, a metaphor, as the idea that we learn and behave like computers. By not examining what underpins each concept at every stage, we risk evading real understanding.
Finally, I was concerned by Epstein’s use of the word ‘orderly’.
‘Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience… For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.’
On what basis is he using the word ‘orderly’? Again, it feels loaded. And mechanical. Read it again: ‘all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences.’ This has a lot of nested logic. Can we function in the world without our brains changing in ‘orderly’ ways? Can we function in the world with brains that change in disorderly ways? Will our experiences always lead to our brains changing in orderly ways? What happens when they don’t? Nor does it follow that being able to recall a piece of music or a song sequentially means the brain learnt it in an orderly way. Is that the suggestion? I’m not sure.
We teach things logically because we value logical thinking. Teaching things logically encourages logical thinking, and is rewarded. Does it follow that logic is the only way to learn? Valuing logic is why we liked the analogy of the human brain as a computer in the first place. Computers are logical. Computers can be understood. We want to think of ourselves and our brains as logical; we’d like to think our brains can be understood. But our thoughts and actions are not always logical. Our brains are not like computers. Emotion, genetics and experience all create permutations that are not understood. The ‘logic’ of how our brains work may exist, but it is not yet understood. And the outcomes of ‘logical’ brain activity may not themselves be logical.
Since my daughter was diagnosed with Asperger’s and I was identified as likely also being on the spectrum I’ve thought a lot about how brains work, what we call normal and what we think of as intelligence.
As we were trying to come to grips with what Asperger’s is and what impact it might have on our daughter’s life, we were offered a variety of reading suggestions. One was Making Sense of Asperger’s: A story for children by Debra Ende, which specifically uses the analogy of an Asperger’s brain being like a different ‘operating system’ from normative brains, like comparing PCs and Macs. The idea that human thinking is like that of a computer, that our brains are programmed from birth, is as deeply embedded in thinking about Asperger’s as it is in a variety of other academic forms of thinking about intelligence and human behavior.
From the outset, however, I’ve struggled with all of these ideas to explain what I see in my daughter and how I perceive my own thinking. When you are told you don’t think like ‘normal’ people it’s quite startling. To then try and reverse engineer that to work out how ‘normal’ people think is extremely difficult as you start from an untranslatable position.
I find I often don’t agree with how ‘normal’ people try to explain how someone with Asperger’s thinks, something that’s further complicated by the diversity of symptoms and expressions of Asperger’s and Autism Spectrum Disorder between individuals. It becomes circular: the description you give of what it’s like to have ASD doesn’t seem to relate to me, but it may still apply to some people; I can’t tell, because I don’t know all people. So maybe it does relate to some people, but just because they are supposedly like me doesn’t mean you are describing what it’s like to have ASD. If there are so many ways of experiencing ASD, maybe we haven’t defined ASD very well. But as only ‘normal’ people can diagnose us as not being ‘normal’, how can you win?
What I have come to think is that how we define intelligence and the consensus around what constitutes ‘normal’ are deeply flawed. The limitations of the IP model are very much ingrained in this problem.
A key component of the diagnosis, for instance, was the spread of abilities across different sub-areas of intelligence in a standard intelligence test. Most people, when given an intelligence test, receive similar sub-score results across a range of cognitive areas; when averaged, these sub-scores provide the overall ‘IQ’ number. In people who are not neurotypical, that IQ number is seen as less effective at describing the person’s intelligence, because there are significant differences across the different sub-scores.
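The arithmetic behind this can be sketched in a few lines of Python. The sub-test names, scores and the idea of reporting a simple max-minus-min spread are all invented for illustration; real IQ scoring involves norming and scaling that this ignores:

```python
# Illustrative only: sub-test names, scores and the 'spread' measure are
# invented; real intelligence tests use normed, scaled scoring.

def summarise(sub_scores):
    values = list(sub_scores.values())
    iq = sum(values) / len(values)          # the averaged overall 'IQ' number
    spread = max(values) - min(values)      # gap between strongest and weakest areas
    return iq, spread

even = {"verbal": 102, "visual": 98, "working_memory": 100, "processing_speed": 100}
uneven = {"verbal": 130, "visual": 95, "working_memory": 70, "processing_speed": 105}

print(summarise(even))    # (100.0, 4)  - one number describes this profile well
print(summarise(uneven))  # (100.0, 60) - same average, but it hides the spread
```

Both invented profiles average to the same single number, which is exactly why the averaged ‘IQ’ figure is considered less meaningful for a profile with large differences between sub-scores.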
The model and tests for intelligence, however, seem to draw on assumptions derived from the IP metaphor of the brain, by proposing that there are effectively different measurable sub-routines within the brain, i.e. that there are ways of perceiving and thinking separate from other processes: logical versus creative thinking, for instance, or verbal versus visual thinking, or recalling stored memories versus processing them. This is often described as the difference between reciting a memorized phone number and reciting the same phone number backwards. One is ‘recall’; the other is ‘inverting’ or ‘processing’ that knowledge. These are seen as different operations of intelligence, sometimes described as analogous to ROM and RAM in computers.
I don’t think we yet understand why certain areas of the brain appear to work in sync on specific tasks and functions. We think of the brain as one organ, and recognize that different parts of it have different functions, but the way they interconnect is still being explored.
As an aside, writing this explains why I usually have many, many tabs open in my browser. Reading one article and thinking about it can lead to writing over 2000 words. Reading can be a very slow process.