I recently posted an article by Burkhard Bilger, published in The New Yorker, on David Eagleman. The following post by Betsy Mason for the Wired Science blog looks at aspects of Eagleman’s work: the structure of the brain as a set of competing networks involved in decision making, the role illusions of time may play in schizophrenia, and the legal ramifications of neuroscience’s suggestion that we are not all equal before the law.
Most people probably feel like they know their own brain reasonably well. After all, our thoughts form the core of who we are, or at least who we understand ourselves to be. But it turns out we know only a tiny portion of what our brains are doing and where our own thoughts come from.
Neuroscientist David Eagleman takes us on an enlightening tour of all that our brains are up to behind our backs in his new book Incognito: The Secret Lives of the Brain. Get a preview in the audio excerpt and learn more about the book and Eagleman’s current research in the interview below.
Wired.com: Your book deals a lot with the idea that we are totally unaware of most of what goes on in our brains. Is this why we get hunches that seem to just materialize from nowhere?
David Eagleman: The main thing that inspired me to write this book is looking at all the ways the conscious mind is just the littlest bit of what’s happening in the brain. Your brain does these massive computations under the hood all the time. And a hunch essentially is the result of all those computations. So it’s exactly like riding a bicycle, the way you don’t have to be consciously aware — in fact you cannot be consciously aware. Your consciousness has no access to the operations running under the hood that allow you to ride the bicycle, or for that matter that allow you to recognize somebody’s face. You don’t know how you recognize somebody’s face, you just do it effortlessly.
In both of these cases, it’s very hard to write computer programs to do this stuff, to ride a bicycle or recognize somebody’s face, because there’s massive computation going on there that’s required. Your brain does this all effortlessly and the hunch is when it serves up the end result of those computations.
Essentially the conscious mind is like a newspaper headline in the sense that all it ever wants is the summary, it doesn’t need to know all the details of how something happened, it just wants to know, Obama’s in China or whatever it is. It doesn’t need to know every bit of the background of American history and Chinese history, it just wants to know what’s happening right now. And so a hunch is a way of summarizing vast quantities of data. And you may not have any conscious access to how the computation was made.
Wired.com: If our brains are working without us being consciously aware of it, how does that affect the choices we make?
Eagleman: When people go through marriage registries, they find that people are more likely to marry other people whose first name begins with the first letter of their own first name — Alex and Amy, Joel and Jenny, Donny and Daisy, these kinds of things. And if your name is Dennis or Denise you’re statistically more likely to become a dentist. This can be verified by looking in professional registries of dentists.
Also, people whose birthday is Feb. 2 are disproportionately more likely to move to cities with the number two in their name, like Twin Lakes, Wisconsin. And people born on 3/3 are statistically overrepresented in places like Three Forks, Montana, and so on.
Anyway, the point of all this is that it’s a crazy reason to choose a life mate or a city to live in or a profession, and if you ask people why they made these choices, that probably would not be included in their conscious narrative. And yet it’s statistically provable that these things do have an influence in very subtle ways on our choices. People like brands, for example, that begin with the same first letters as their first names, and they’ll be more likely to choose that brand just based on that, even though they’re not consciously aware they’re doing it.
Wired.com: So if we’re not consciously directing our own decision-making, how do our brains handle the process?
Eagleman: I make this argument about the brain being like a team of rivals. I synthesize a lot of data to show that you are not one thing, but instead your brain is made up of these competing networks that are all battling it out to control this single output channel of your behavior. And so your brain’s like a neural parliament, and you’ve got these different parties in there like the Democrats and Republicans and Libertarians, all of whom love their country and feel that they know the best way to steer the ship of state. But they have differing opinions on how to do it, and they have to fight it out.
This is why we can cuss at ourselves and cajole ourselves and get angry at ourselves, and this is why you can do some behavior and look back and think, “Wow, how did I do that?” It’s because you are not one person, you are not one thing. As Walt Whitman said, “I am large, I contain multitudes.”
In the book I break down all these different competing parties in the brain. If you’re trying to understand yourself, and what kind of person am I, and why did I do that and so on, this gives you a much richer view of what’s actually happening under the hood so that you don’t suffer from the illusion that there’s a single “I” in there.
And once you start understanding this about yourself, then you can structure things so that you constrain your future behavior. For example, once you realize that your short-term instant-gratification circuits will be really tempted in a certain situation, you can then think about it in advance and make sure you don’t get yourself into that situation.
Wired.com: Besides short-term vs. long-term interests, what are some of the other warring parties in your brain?
Eagleman: Another team of rivals is closely related to this issue of emotion and reason. You have certain parts of your brain that really care about essentially math problems — just adding things up and deciding on something in a purely logical way — and then other networks in your brain that are involved in what we generally summarize as emotion. And these largely have to do with monitoring your internal state, instead of the external world, and they have to do with judging how things will pay off for the system, whether these things are good or bad.
And it turns out with neuroimaging you can see these things in competition when people are making a decision, let’s say a moral decision where logically you might feel like you want to go one way, but emotionally you feel like you want to go the other way. You see these battles in action.
I think an understanding of the teams of rivals in the brain allows us to think more clearly about other people’s behavior. In the book I use the example of Mel Gibson, who made these anti-Semitic rants when he was drunk and then afterward wrote these letters of apology that seemed to be genuine. Everybody was arguing about whether he is an anti-Semite or not. What I say in the book is, even though his behavior was despicable, are we obliged to think somebody is or is not something? Isn’t it possible for somebody to have both racist and non-racist parts of their brain coexisting in a single person — where he might say things at one point and feel bad about it at another point?
To say that somebody has true colors — that there’s one thing this person is — isn’t nuanced enough given our understanding of modern neuroscience. People are complex in this way. They contain multitudes.
Wired.com: How does the brain decide who wins?
Eagleman: One piece of advice that I give in the book is something that my mother advised me a long time ago: If you’re ever stuck between two options and you just can’t make a decision, flip a coin and then consult your gut feeling about whether you’re disappointed with the way the coin landed.
What you’re doing is a gut check on how the coin toss came out, and that immediately tells you what you need to know. You sort of pretend that you’re committing to it. And then if you go, “Oh shit!” then you know that you should just go with the other choice.
Wired.com: What does your current research about time teach us about how our brains work?
Eagleman: I study time perception and illusions of time, and one of the main questions that I look at is, as we understand time better, what does that tell us about how these systems can break?
One of the experiments we did a few years ago showed that if we inject a small delay between your motor act and some sensory feedback, then when we remove that delay, you’ll have the impression that the feedback happened before you did the act. So, if you press a button and that causes a flash of light to go off, you quickly learn that you are causing the flash of light. Now if we insert a small delay so that when you press the button, there’s a tenth of a second before the flash, you don’t notice that delay, but your brain starts to adjust for it. Then, if we suddenly remove that delay, you’ll hit the button, the flash of light will happen immediately, but you will swear the flash of light happened before you pressed the button.
This is an illusory reversal of action and effect. What people say in this situation is, “It wasn’t me. The flash happened before I pressed the button.” And that struck me as very interesting because this is something that we see in schizophrenia. Schizophrenics will do what’s called credit misattribution where they will make an act, and they will claim that they are not the ones responsible for it.
So that immediately got me thinking that maybe what’s happening in schizophrenia is fundamentally a disorder of time perception. Because if you’re putting out actions into the world and you’re getting feedback, but you’re not getting the timing correct, then you will have cognitive fragmentation, which is what schizophrenics have.
We always talk to ourselves internally and listen to ourselves — we have an internal monologue going on. Now imagine that the talking and the hearing that’s going on internally, imagine that the order got reversed. Well, you would have to attribute that voice to somebody else. That’s an auditory hallucination, and that’s another thing that characterizes schizophrenia.
I’ve been running studies now at the Harris County Psychiatric Center with schizophrenic patients, and it does appear that they have problems in the time domain. So I’m pursuing this hypothesis right now about whether schizophrenia is fundamentally a disorder of time perception. If it is, that’s a big deal because it means that we might be able to develop rehabilitative strategies that involve playing a little video game in front of a computer instead of a pharmaceutical approach.
Wired.com: You argue that brain science could improve our legal system. How would that work?
Eagleman: As part of my day job I direct the Initiative on Neuroscience and Law. What I’m asking is, given the situation that much of what we do and think and act and believe is all generated by parts of our brain that we have no access to, what does this mean when we think about responsibility and blameworthiness when people are bad actors in society?
I’ve been working on this topic for years and it’s become clear to me that our legal system as it stands now is broken and outdated. It essentially rests on this myth of equality, which says all brains are created equal. As long as you’re over 18 and your IQ is over 70, all brains are treated as though they have an equal capacity for decision making, for simulating possible futures, for understanding consequences and so on. And it’s simply not true.
Along any axis that we measure brains, they are very different. There’s as much variation neurally as there is in people’s external physical characteristics. So an enlightened legal system — one that’s also more humane and more cost-effective — will, instead of treating everybody equally and using incarceration as a one-size-fits-all solution, do customized sentencing and customized rehabilitation. It will try to understand people better, in terms of what can be done with them and for them. It can have better risk assessment to understand, “How dangerous is this person?”
We still have to take people who break the law off the streets to have a good society, so this doesn’t forgive anybody. But what it means is we can have a forward-looking legal system that worries about the probability of recidivism — in other words, what is the probability that this person’s behavior will transfer to other future situations? That makes a forward-looking legal system instead of a backward-looking one like we have now, which is just a matter of asking, “How blameworthy are you?” and punishing accordingly.
What’s happening more and more in courtrooms is defense lawyers will argue that their client has bad genes, that he was sexually abused as a child or he had in utero cocaine poisoning, so it wasn’t really his fault. It turns out that’s the wrong question to ask, because the interaction of genes and environment is so complex that we will never be able to say how somebody came to be who he is now and whether he had any real choice in the matter of whether he behaved this way or that way. So the only logical thing is to have a forward-looking system that says, “We can’t know the answer to that, all we need to know is how dangerous is this person into the future?”
Beyond that there’s this issue that our prison system has become our de facto mental health care system. The estimates now are that 30 percent of the prison population suffers from some sort of mental illness. It’s much more humane and enlightened and cost-effective to have a system that deals with the mentally ill separately, deals with drug addicts separately, and so on. Incarceration is the right solution for some people because it will be a deterrent. But it doesn’t work if your brain’s not functioning properly. If you’re suffering from a psychosis, for example, putting you in prison isn’t going to fix that.
Wired.com: So do we need, alongside the jury of your peers, a jury of brain experts?
Eagleman: So here’s what I think. Trials have two phases. There’s the guilt phase, or the fact-finding phase, and of course that should always remain with a jury of your peers — there are many reasons for that. But the sentencing phase should be done with statistics and sophisticated risk assessment instruments. And I should mention, these are already underway.
For example, with sex offenders, people have done very good studies where they have followed tens of thousands of sex offenders for years after they are released from prison, and they find out who recidivates and who doesn’t. Then they correlate that with all of these things they can measure about the person. And it turns out that that gives really good predictive power about who’s likely to recidivate and who’s not.
Now I need to specify here that we will never be able to know whether any individual will commit a crime again or not, because life’s too complicated and crime is often circumstantial. Nonetheless, it is the case that some people are more dangerous than others. And these statistical tests are incredibly powerful tools for understanding who on average is going to be more dangerous than whom and thereby how long we should sentence them for.
Betsy Mason is the editor of Wired Science.
Follow @betsymason on Twitter.