The Undiscovered Mind

How the Human Brain Defies Replication, Medication, and Explanation

About The Book

GRAY MATTER UNDER INVESTIGATION

In his acclaimed book The End of Science, John Horgan ignited a firestorm of controversy about the limits of knowledge in a wide range of sciences. Now in The Undiscovered Mind he focuses on the single most important scientific enterprise of all -- the effort to understand the human mind.

Horgan takes us inside laboratories, hospitals, and universities to meet neuroscientists, Freudian analysts, electroshock therapists, behavioral geneticists, evolutionary psychologists, artificial intelligence engineers, and philosophers of consciousness. He looks into the persistent explanatory gap between mind and body that Socrates pondered and shows that it has not been bridged. He investigates what he calls the "Humpty Dumpty dilemma," the fact that neuroscientists can break the brain and mind into pieces but cannot put the pieces back together again. He presents evidence that the placebo effect is the primary ingredient of psychotherapy, Prozac, and other treatments for mental disorders. As Horgan shows, the mystery of human consciousness, of why and how we think, remains so impregnable that expecting scientific method and technology to penetrate it anytime soon is absurd.

Excerpt

Chapter One: Neuroscience's Explanatory Gap

By 1979 Freudian psychology was treated as only an interesting historical note. The fashionable new frontier was the clinical study of the central nervous system....Today the new savants probe and probe and slice and slice and project their slides and regard Freud's mental constructs, his "libidos," "Oedipal complexes," and the rest, as quaint quackeries of yore, along the lines of Mesmer's "animal magnetism."

-- Tom Wolfe, In Our Time

In Phaedo Plato described the last hours of Socrates, who had been imprisoned and sentenced to death by Athenian authorities. Socrates told friends who had assembled in the prison why he had accepted his death sentence rather than fleeing. At one point, Socrates ridiculed the notion that his behavior could be explained in physical terms. Someone who held such a belief, Socrates speculated, would claim that

as the bones are lifted at their joints by the contraction or relaxation of the muscles, I am able to bend my limbs, and that is why I am sitting here in a curved posture...and he would have a similar explanation of my talking to you, which he would attribute to sound, and air, and hearing...forgetting to mention the true cause, which is, that the Athenians have thought fit to condemn me, and accordingly I have thought it better and more right to remain here and undergo my sentence.

This is the oldest allusion I know of to what some modern philosophers call the explanatory gap. The term was coined by Joseph Levine, a philosopher at North Carolina State University. In "Materialism and Qualia: The Explanatory Gap," published in Pacific Philosophical Quarterly in 1983, Levine addressed the puzzling inability of physiological theories to account for psychological phenomena. Levine's main focus was on consciousness, or "qualia," our subjective sensations of the world. But the explanatory gap could also refer to mental functions such as perception, memory, reasoning, and emotion -- and to human behavior.

The field that seems most likely to close the explanatory gap is neuroscience, the study of the brain. When Plato wrote Phaedo, no one even knew that the brain is the seat of mental functioning. (Aristotle's observation that chickens often continue running after being decapitated led him to rule out the brain as the body's control center.) Today neuroscientists are probing the links between the brain and the mind with an ever more potent array of tools. They can watch the entire brain in action with positron emission tomography and magnetic resonance imaging. They can monitor the minute electrical impulses passing between individual nerve cells with microelectrodes. They can trace the effects of specific genes and neurotransmitters on the brain's functioning. Investigators hope that eventually neuroscience will do for mind-science what molecular biology did for evolutionary biology, placing it on a firm empirical foundation that leads to powerful new insights and applications.

Neuroscience is certainly a growth industry. Membership in the Society for Neuroscience, based in Washington, D.C., soared from 500 in 1970, the year it was founded, to over 25,000 in 1998. Neuroscience journals have proliferated, as has coverage of the topic in premier general-interest journals such as Science and Nature. When Nature launched a new periodical, Nature Neuroscience, in 1998, it proclaimed that neuroscience "is one of the most vigorous and fast growing areas of biology. Not only is understanding the brain one of the great scientific challenges of our time, it also has profound implications for society, ranging from the basis of memory to the causes of Alzheimer's disease to the origins of emotions, personality and even consciousness itself." Neuroscience is clearly advancing; it is getting somewhere. But where?

I once asked Gerald Fischbach, the head of Harvard's Department of Neuroscience and a former president of the Society for Neuroscience, to name what he considered to be the most important accomplishment of his field. He smiled at the naiveté of the question. Neuroscience is a vast enterprise, he pointed out, which ranges from studies of molecules that facilitate neural transmission to magnetic resonance imaging of whole-brain activity. It is impossible, Fischbach added, to single out any particular finding, or even a set of findings, emerging from neuroscience. The field's most striking characteristic is its production of such an enormous and still-growing number of discoveries. Researchers keep finding new types of brain cells, or neurons; neurotransmitters, the chemicals by which neurons communicate with each other; neural receptors, the lumps of protein on the surface of neurons into which neurotransmitters fit; and neurotrophic factors, chemicals that guide the growth of the brain from the embryonic stage into adulthood.

Not long ago, Fischbach elaborated, researchers believed there was only one receptor for the neurotransmitter acetylcholine, which controls muscle functioning; now at least ten different receptors have been identified. Experiments have turned up at least fifteen receptors for the so-called GABA (gamma-aminobutyric acid) neurotransmitter, which inhibits neural activity. Research into neurotrophic factors is also "exploding," Fischbach said. Researchers had learned that neurotrophic factors continue to shape the brain not only in utero and during infancy but throughout our life span. Unfortunately, neuroscientists had not determined how to fit all these findings into a coherent framework. "We're not close to having a unified view of human mental life," Fischbach said.

Fischbach was spotlighting one of his field's most paradoxical features. Although reductionist is often used as a derogatory term, science is reductionist by definition. As the philosopher Daniel Dennett once put it, "Leaving something out is not a feature of failed explanations, but of successful explanations." Science at its best isolates a common element underlying many seemingly disparate phenomena. Newton discovered that the tendency of objects to fall to the ground, the swelling and ebbing of seas, and the motion of the moon and planets through space could all be explained by a single force, gravity. Modern physicists have demonstrated that all matter consists basically of two types of particles, quarks and electrons. Darwin showed that all the diverse species on earth were created through a single process, evolution. In the last half-century, Francis Crick, James Watson, and other molecular biologists revealed that all organisms share essentially the same DNA-based method of transmitting genetic information to their offspring. Neuroscientists, in contrast, have yet to achieve their reductionist epiphany. Instead of finding a great unifying insight, they just keep uncovering more and more complexity. Neuroscience's progress is really a kind of anti-progress. As researchers learn more about the brain, it becomes increasingly difficult to imagine how all the disparate data can be organized into a cohesive, coherent whole.

The Humpty Dumpty Dilemma

In 1990, the Society for Neuroscience persuaded the U.S. Congress to designate the 1990s the Decade of the Brain. The goal of the proclamation was both to celebrate the achievements of neuroscience and to support efforts to understand mental disorders such as schizophrenia and manic depression (also known as bipolar illness). One neuroscientist who opposed the idea was Torsten Wiesel, who won a Nobel prize in 1981 and went on to become president of Rockefeller University in New York. (He stepped down to return to research at the end of 1998.) Born and raised in Sweden, Wiesel is a soft-spoken, reticent man, but when I interviewed him at Rockefeller University in early 1998, he became heated at the mention of the "Decade of the Brain."

The idea was "foolish," he grumbled. "We need at least a century, maybe even a millennium," to comprehend the brain. "We still don't understand how C. elegans works," he continued, referring to a tiny worm that serves as a laboratory for molecular and cellular biologists. Scientists had discovered some "simple mechanisms" in the brain, but they still did not really understand how the brain develops in the womb and beyond, how the brain ages, how memory works. "We are at the very early stage of brain science." (Nevertheless, in 1998 behavioral scientists -- a broad category including psychologists, geneticists, anthropologists, and others -- began lobbying for the decade beginning in the year 2000 to be named the Decade of Behavior.)

Wiesel himself participated in one of neuroscience's paradigmatic discoveries. Like many other scientific triumphs, this one resulted from both hard work and serendipity. In 1958 Wiesel and another young neuroscientist, David Hubel, were conducting experiments on the visual cortex of a cat in a "small, dingy, windowless basement lab" (according to one account) at the Johns Hopkins Medical School. After implanting an electrode in the cat's visual cortex, they projected images on the cat's retina with a slide projector attached to an ophthalmoscope. They presented the cat with two simple stimuli: a bright spot on a dark background and a dark spot on a bright background. When the electrode detected an electric discharge from a neuron, a device similar to a Geiger counter would emit a loud click.

Wiesel and Hubel were getting inconclusive results when one of their slides became stuck in the projector. After unjamming the slide, they slowly pushed it into its slot. Suddenly the electrode monitor started firing like "a machine gun." Wiesel and Hubel eventually realized that the neuron was responding to the edge of the slide moving across the cat's field of vision. In subsequent experiments, they found neurons that respond to lines only at specific orientations relative to the position of the retina. As the investigators moved the electrode through the visual cortex, the orientation of the lines to which the neurons responded kept changing, like a minute hand circling a clock. In 1981 Wiesel and Hubel received a Nobel prize for their research.

These findings are emblematic of a larger trend in neuroscience. Arguably the most important discovery to emerge from the field is that different regions of the brain are specialized for carrying out different functions. This insight is hardly new; Franz Gall said as much two centuries ago when he invented phrenology (which degenerated into a pseudoscientific method for determining character from the shape of the skull). But modern researchers keep slicing the brain into smaller and smaller pieces, with no end to the process in sight.

As recently as the 1950s, many scientists believed that memory is a single -- albeit highly versatile -- function. The researcher Karl Lashley was a prominent advocate of this view. He argued that memories are processed and stored not in any single location but throughout the brain. As evidence, he pointed to experiments in which lesions in the brains of rats did not significantly affect their ability to remember how to navigate mazes. What Lashley failed to realize was that rats have many redundant methods for navigating a maze; if the rat's ability to recollect visual cues is damaged, it may rely instead on olfactory or tactile cues.

Subsequent experiments involving both humans and other animals revealed many types of memory, each underpinned by its own region of the brain. The two major categories of memory are explicit, or declarative, memory, which involves conscious recollection; and implicit, or nonconscious, memory, which remains below the level of awareness but nonetheless affects behavior and mental functioning.

Memory has been divided into other categories as well, some of which overlap. Short-term memory, which is sometimes called working memory, allows us to glance at a telephone number and recall it just long enough to dial it a few seconds later. Long-term memory keeps that same telephone number in permanent storage, ready to be accessed when needed. Procedural memory lets us acquire and perform such reflexive skills as driving a car, touch-typing, or playing tennis. Episodic memory enables us to recall specific events.

Experiments have also identified a phenomenon known as priming, which is similar to the old notion of subliminal influence. Subjects are exposed to a stimulus, such as a sound or image, so briefly that they never become consciously aware of it and cannot recall it later. Yet tests show that the stimulus has been imprinted on the brain at some level. In one set of experiments, subjects are shown a list of words too briefly for the words to be stored in short-term memory. Later the subjects are asked to play a game similar to the television game Wheel of Fortune. Given the clue "o-t-p-s," they must guess what the full word is. Subjects who have previously been exposed to a list of words containing octopus are much more likely to guess correctly, even though they cannot explicitly recall whether the list included octopus.

Technologies such as positron emission tomography (PET) and magnetic resonance imaging (MRI) have accelerated the fragmentation of the brain and mind. PET scans monitor short-lived radioactive isotopes of oxygen that have been injected into the blood. High levels of the isotope indicate increased blood flow and thus increased neural activity. MRI dispenses with the need for an injection of a radioactive substance. A powerful electromagnetic pulse causes certain atoms to align in a particular direction, like iron filings arranged around a magnet. When the magnetic field is relaxed, the atoms emit radiation at characteristic frequencies.

Imaging studies often focus on subjects performing some task: solving mathematical puzzles, sorting images according to category, memorizing lists of words. Those regions of the brain that are most active are assumed to be crucial to the activity. Karl Friston, an MRI specialist at the Institute of Neurology in London, compared this cataloguing of neural "hot spots" to Darwin's patient gathering of data on animals from around the world. "Without this catalogue of functional specialization," Friston said, "I don't think that one's going to go far in assembling a useful and accountable theory of brain organization."

But Friston felt that the push toward localization had gone too far. Too many studies simply associate a given region with a given function "without any reference to any conceptual framework or proper or deep understanding of the functional architecture of the brain." Different parts of the brain are also clearly interconnected, and understanding these neural connections is crucial to understanding the mind. "Looking at the correlations between different areas," Friston said, "has been very much underemphasized."

Rodolfo Llinas, a neuroscientist at New York University, was even more critical of the manner in which neuroimaging is being used, particularly in psychiatry. "You find somebody who has a particular problem, and you see a red spot on the front of the cortex and you say, 'Okay, so that spot of the cortex is the site where you have bad thoughts.' It's absolutely incredible! The brain does not function as a single-area organ!" Llinas compared these studies to phrenology, the eighteenth-century pseudoscience that divided the brain into discrete chunks dedicated to specific functions. "You have a patient, and you put the patient into the instrument, and you write a paper, because you can just see it," Llinas said. "It's phrenology!"

Llinas recalled that neuroscience previously went through a phase when researchers injected drugs into monkeys or rats and published a paper on the results, whether or not the results were meaningful. "We're about in those terms" with the new imaging technologies, Llinas asserted. "We tend to publish a few cases and to say, 'This is how it works, because look at the beautiful picture it got.'" But "then you go into the details, and it becomes a bit of a mirage."

As neuroscientists keep subdividing the brain, one question looms ever larger: How does the brain coordinate and integrate the workings of its highly specialized parts to create the apparent unity of perception and thought that constitutes the mind? The Harvard neuroscientist David Hubel, whose experiments with Torsten Wiesel helped to create the current crisis in neuroscience, stated at the end of his book Eye, Brain and Vision:

This surprising tendency for attributes such as form, color, and movement to be handled by separate structures in the brain immediately raises the question of how all the information is finally assembled, say for perceiving a bouncing red ball. It obviously must be assembled somewhere, if only at the motor nerves that subserve the action of catching. Where it's assembled, and how, we have no idea.

This conundrum is sometimes called the binding problem. I would like to propose another term: the Humpty Dumpty dilemma. It plagues not only neuroscience but also evolutionary psychology, cognitive science, artificial intelligence -- and indeed all fields that divide the mind into a collection of relatively discrete "modules," "intelligences," "instincts," or "computational devices." Like a precocious eight-year-old tinkering with a radio, mind-scientists excel at taking the brain apart, but they have no idea how to put it back together again.

Patricia Goldman-Rakic's Explanatory Gap

One neuroscientist striving to solve the Humpty Dumpty dilemma is Patricia Goldman-Rakic, a professor at the Yale University School of Medicine. Goldman-Rakic, who heads one of the most sophisticated neuroscience laboratories in the world, studies not human brains but those of a close relative, the macaque monkey. Goldman-Rakic calls herself a "systems neuroscientist." By working on the frontal cortex, which is thought to be the seat of reasoning ability, decision making, and other higher cognitive functions, she hopes to show how psychology, psychiatry, and other macro-level approaches to the mind can be integrated with the more reductionist models focusing on neural, genetic, and molecular processes.

A major focus of her research is working memory. Like the random-access memory of a computer, which makes information available for instant use, working memory allows us to maintain the thread of a conversation, read a book, play a game of cards, or perform simple arithmetic calculations in our heads. Many neuroscientists think a better understanding of working memory will help to solve mysteries such as the binding problem, free will, consciousness, and schizophrenia. No other neuroscientist is better positioned to close the explanatory gap than Goldman-Rakic, and yet I never felt the explanatory gap more vividly, even viscerally, than when I visited her laboratory.

The animal rights movement has turned laboratories such as Goldman-Rakic's into fortresses. Visitors must check in with an armed security guard at the entrance of the Yale Medical School; they are escorted through two steel doors, each with a small window, which can be opened only with a magnetic key. Within is a large suite of rooms containing monkeys, microscopes, surgical equipment, and all the latest instruments of the biotechnology revolution. In one room, a young woman was painstakingly slicing the frozen, walnut-size brain of a monkey into transparently thin sheets with what looked like a miniature deli slicer. A young man in an adjacent office was examining cross-sections under a microscope and tracing on paper the fantastically intricate connections between the neurons. Later he would transfer these tracings into a computer to form high-resolution, three-dimensional maps of neural circuitry.

Goldman-Rakic and her colleagues have perfected a technique that provides the same information as a PET scan but with much higher resolution. After being injected with radioactive chemicals that help to metabolize glucose, the monkeys perform certain tasks. Investigators quickly sacrifice the monkeys and freeze their brains. By measuring the levels of radioactivity in different regions of the brain, the researchers can determine which regions contributed most to the performance of the task.

Another room houses an apparatus for probing the working memory of monkeys. The monkey sits on a chair in a box-shaped steel frame facing a screen on which the researchers project signals and images. Its head is fixed in place with bolts that are screwed into its skull and attached to the frame. A sensor implanted in the monkey's eye -- the wire from which passes through a plug in the monkey's skull to a recording device -- allows the researchers to track eye movements. Electrodes implanted in the monkey's frontal cortex monitor the firing of individual neurons.

The master of this rather forbidding domain is a petite, elegantly coiffed woman who, on the day of my visit, wore a white cashmere sweater and gold earrings. When we sat down to discuss her work, Goldman-Rakic was for the most part guarded and reserved, but now and then to emphasize a point she leaned toward me and gripped my forearm. The goal of her research, she said, is to understand such higher cortical functions as memory, perception, and decision making. For those interested in higher cortical functions, the macaque monkey serves as an "unexcelled" model, she said. Monkeys are capable of cognition that is fundamentally similar to that of humans, though obviously not as sophisticated. When injected with amphetamines, monkeys even display behavior that resembles that of schizophrenic humans. "We're at the edge," Goldman-Rakic said, "making discoveries that are of great moment for understanding humans."

Experiments on monkeys have helped to illuminate working memory, which Goldman-Rakic described as a "mental sketchpad" or "glue" that helps to provide continuity of thought. Working-memory capacity correlates strongly with general intelligence and reading ability. People with a weak working memory have a harder time understanding complex sentences, in which subjects and verbs are separated by embedded clauses. Schizophrenia may also stem from a deficit in working memory. One major symptom of schizophrenia is "thought derailment," Goldman-Rakic explained. Schizophrenics keep losing their train of thought; they are therefore excessively sensitive to and easily overwhelmed by incoming perceptions.

Her research could provide insights into both normal and deranged human cognition and thus point the way to better pharmacological or behavioral therapies. She and her coworkers were studying how dopamine, serotonin, and other neurotransmitters inhibit or facilitate cortical functioning. "Many diseases involve dopamine: schizophrenia, Parkinson's disease, possibly childhood disorders like attention deficit syndrome." Drugs such as Prozac suggested that serotonin can have a profound effect on mood. Prozac "changes a person's life from blackness to lightness, all right? Now, why?" Her team had just taken one step toward the answer by showing that certain cortical cells react differently to incoming signals depending on serotonin levels.

Could her research lead to drugs that boost memory and intelligence? "Absolutely! Definitely!" she exclaimed. "There are drugs that do this already, but these are drugs that are not necessarily always effective, or they have side effects." She emphasized that her group is not pursuing such drugs as an end in themselves. "The goal of my research is not to support the pharmaceutical industry, at all. It's to learn how the brain works, and particularly how the portions of the brain or the systems that are involved in cognition work."

Cognition, explained Goldman-Rakic, entails much more than merely responding automatically to a stimulus, like a driver stopping at a red light and going on green. "Humans have lots of habitual responses, automatic responses, reflexive responses. But that's not what makes them human. What makes them human is the flexibility of their responses, their ability not to respond as well as to respond, their ability to reflect, and their ability to draw upon their experience, to guide a particular response at a particular moment." Was she really talking about free will? "I could use that terminology," Goldman-Rakic replied, dropping her voice and speaking in a conspiratorial mock whisper, "if I really were disinhibited."

She fetched an article describing one of her experiments and opened it on the table in front of us. In the experiment, the monkey was trained to keep his eyes focused on the center of a screen while the researchers briefly shone a light on one of the screen's edges or corners. The monkey had learned to wait a few seconds after the light went off before looking directly at where the light had been. During these few seconds, the monkey had to store the light's location in his working memory.

Goldman-Rakic pointed to one of the article's graphs, which represented the activity of neurons that started firing when the light appeared and kept firing after the light vanished. She noted that Torsten Wiesel and David Hubel and most other neuroscientists focused on neurons that responded directly to external stimuli. "This," Goldman-Rakic said, jabbing her finger at the graph, "is sooooo different." These neurons were firing in the absence of an external stimulus; this neural activity corresponded not to a real image but to the memory, or internal representation, of an image. "This," she continued dramatically, "is the cellular correlate of the mechanism for holding online information." She let her words sink in for a moment and added, "So here you have the neurophysiology of cognition."

There was a long pause, during which both of us stared at the graph. Goldman-Rakic started laughing. "You frown so!" she said. I confessed that I was having a hard time grasping the significance of her work. Of all the topics I had covered as a journalist, I said, neuroscience was the hardest -- harder even than particle physics. Goldman-Rakic chortled and called out to a young woman walking through the room, "He's saying that neuroscience is harder than particle physics!" Turning back to me she said, "I'm trying to make it easy for you!"

My problem, I said, was making the transition from these graphs showing the firing rates of neurons to big concepts like memory and cognition and free will. I could understand reductionism when it came to particle physics, but when it came to the human mind, I felt I was missing something. "I want to kill you," she said. "Here I am putting all this energy into explaining this, and you say it's too hard." But surely I wasn't the only person who had ever reacted in this way to her explanations, I replied; philosophers even had a term for this reaction, the explanatory gap.

"I think it's in your head that there's an explanatory gap," said Goldman-Rakic firmly. "The moment-to-moment changes in the cells and the brain and all of that are certainly not worked out. And what makes you you, and me me, I'm not going to explain today, and maybe never." Scientists could not understand the origin of the universe either, she said. Nevertheless, she assured me, "we are on the road to understanding human cognition."

Others have sensed an explanatory gap when confronting the research of Goldman-Rakic and her colleagues in neuroscience. Shortly before I left Scientific American in 1997, I edited an article on working memory in which Goldman-Rakic was prominently featured. The article's author was another staff writer for Scientific American, Timothy Beardsley, a veteran science journalist with a doctorate in animal behavior from the University of Oxford. During the editing process, Beardsley confessed that he had never encountered work more difficult to comprehend and present in a coherent, satisfactory form. He felt as though he was missing something.

Several months after the publication of Beardsley's article, "The Machinery of Thought," Scientific American printed a letter that touched on the problem with which Beardsley and I had struggled. The letter complained that the research described in Beardsley's article "tells us only where something happens in the brain, not what the actual mechanisms are for recognizing, remembering and so on. And that, of course, is what we really want to know."

Getting in Touch with Emotions

Even if they unravel the mechanisms underlying working memory and other cognitive functions, neuroscientists must face another problem: How does emotion fit into the puzzle? Until recently many neuroscientists sought to sidestep emotion in their experiments, treating it as an annoying source of experimental noise and distortion rather than a fundamental part of human nature. Neuroscientists have followed the lead of cognitive scientists, who have tried to understand those information-processing functions that can be most easily duplicated in computers, such as vision, recollection, speech recognition, and reasoning.

By avoiding emotion, neuroscientists and cognitive scientists have created a peculiarly one-dimensional picture of the mind, according to Joseph LeDoux, a neuroscientist at New York University. Cognitive science "is really a science of only a part of the mind, the part having to do with thinking, reasoning, and intellect," LeDoux complained in his 1996 book, The Emotional Brain. "It leaves emotions out. And minds without emotions are not really minds at all. They are souls on ice -- cold, lifeless creatures devoid of any desires, fears, sorrows, pains, or pleasures."

LeDoux, himself a cool, controlled man with deep-set eyes and a carefully trimmed beard, has demonstrated that at least one emotion, fear, can be approached empirically. Unlike language or other cognitive functions unique to humans, LeDoux pointed out, fear is a biological phenomenon whose roots reach back far into the history of life. The neural circuitry and processes that underlie fear have been highly conserved through evolution; thus experiments on rats and other animals may reveal much about humans. The amygdala, which is crucial to the fear response, is found not only in humans and primates but also in rats.

"The fear system is very, very simple," LeDoux told me. "You've got a stimulus that comes in through standard input channels, goes to the amygdala and goes out through the output channels," he said. Early studies of fear responses had produced confusing results because the experiments were too complex. "Every time you change the experiment, you change the way the brain accomplishes the task. So the key in figuring out the fear system is to strip it down to a simpler model."

LeDoux has carried out experiments in which rats have been conditioned to associate a certain sound, such as a musical tone, with an unpleasant sensation, such as an electric shock. The initial response of rats and many other animals to such a stimulus is to freeze, an appropriate tactic for an animal threatened by a predator. The freeze response is an innate, reflexive function. LeDoux and his colleagues showed that damage to a minute structure within the amygdala, called the lateral nucleus, prevented rats from learning to freeze in response to the tone preceding an electric shock. The cognitive ability of the rats was unimpaired in other respects.

LeDoux was trying to unravel the circuitry required for more complex fear-related behavior, which is sometimes called instrumental learning. For example, when a rat learns that freezing does not prevent him from being shocked, he tries avoidance -- moving to a different part of the cage or climbing up its sides. At this point, the rat makes the transition from being an emotional reactor to an actor, LeDoux said, capable of making choices and trying different strategies.

Psychologists once believed that the subjective sensation of fear is the first component of the fear response; increased heart rate, sweating, and other physiological symptoms were thought to be triggered by the subjective sensation. LeDoux contended that the opposite is probably true; physiological symptoms occur first and then initiate the subjective sensation of fear. In many cases, moreover, the fear response might never generate a conscious sensation. Our conscious, subjective feelings "are red herrings, detours, in the scientific study of emotions," LeDoux has written.

LeDoux felt that too much attention had been paid to consciousness lately. "It would surely get you the Nobel prize if you figured it out," he told me, "but I don't think it would tell us what we need to know" about the mind. Although consciousness is often equated with the mind, most mental processes occur beneath the level of awareness, LeDoux pointed out. Consciousness, moreover, is a relatively recent innovation of evolution. "Basically the brain is unconscious. Somewhere in evolution consciousness evolved as a module. It's connected up to some other parts of the brain, but not the rest of it."

Explaining consciousness is not as important as understanding how the brain draws on both genes and experience to create a self, a personal identity, in each individual. "That to me is the big question: how our brain makes us who we are. Explaining consciousness wouldn't explain that." The key to this issue is understanding how both nature and nurture affect the brain's wiring. "What's often overlooked is that nature and nurture speak the same language, which is the synaptic language," LeDoux said. Ultimately all influences on personality, genetic or experiential, become manifest at the level of the connections between neurons.

LeDoux doubted whether any single theory would account for emotion. There are many aspects of emotion, he noted. "There's an evolutionary component, there's a cognitive component, a behavioral component. It's just a question of what the balance in the particular situation is." Cognitive theories tend to focus on conscious emotional processes; evolutionary theories emphasize innate emotional responses; behavioral theories stress the role of environmental conditioning. "In any particular emotional episode, it's not a matter of which one is right but which one explains which part of the episode." Moreover, each emotion probably requires a separate explanation; the mechanisms underlying fear are probably quite different from those underlying lust or hatred.

LeDoux summarized the research that he and others have done on emotion, and particularly fear, in The Emotional Brain. He also cautiously suggested that investigations of the neurobiology of fear might at some point yield better treatments for human anxiety disorders. LeDoux expected psychiatrists to dismiss his rat experiments as irrelevant to their work. But to his surprise, psychiatrists responded to his book enthusiastically -- almost too enthusiastically, LeDoux suggested. "It's been almost this uncritical acceptance," he explained. "'Yes, let's go! This is the answer!' They seem so desperate. I don't think I have the answers in my book. I just threw out some ideas."

Like Gerald Fischbach, Torsten Wiesel, and other leading neuroscientists, LeDoux readily acknowledges the shortcomings of his field. He once stated, "We have no idea how our brains make us who we are. There is as yet no neuroscience of personality. We have little understanding of how art and history are experienced by the brain. The meltdown of mental life in psychosis is still a mystery. In short, we have yet to come up with a theory that can pull all this together. We haven't yet had a Darwin, Einstein or Newton."

Then LeDoux suggested that neuroscience might not need a unifying theory:

Maybe what we need most are lots of little theories. It would be great to know how anxiety or depression works, even if we don't have a theory of mental illness. And wouldn't it be wonderful to know how we experience a wonderful piece of music (be it rock or Bach), even in the absence of a theory of perception. And to understand fear or love in the absence of a theory of emotion in general wouldn't be so bad either. The field of neuroscience is in a position to make progress on these problems, even if it doesn't come up with a theory of mind and brain.

Gagian Neuroscience

Neuroscience might find it difficult to produce even LeDoux's "little theories." A fundamental impediment to progress in neuroscience -- or in any other mind-related field for that matter -- is the enormous variability of all brains and minds. This problem emerged early on from studies of brain-damaged human subjects, who have long provided clues about the links between the brain and the mind. Let's call this research Gagian neuroscience in honor of one of its most famous subjects, Phineas Gage. The twenty-five-year-old Gage was supervising the construction of a railroad line in Vermont in 1848 when an accidental explosion blew an iron bar more than a yard long into his cheek and clear through the top of his head. Not only did Gage live; he remained completely lucid. About an hour later, he was examined by a physician named Edward Williams. Williams recalled that during the examination, Gage "talked so rationally and was so willing to answer questions, that I directed my inquiries to him in preference to the men who were with him at the time of the accident, and who were standing about at this time." A year later another doctor pronounced Gage "completely recovered."

Lofty theoretical edifices were erected upon Gage's injury. For several decades, his case was seen as a setback to the contention of the phrenologist Franz Gall and others that the brain is divided into subsystems dedicated to different tasks, such as speech, movement, and vision. Early examinations of Gage suggested (wrongly, as it turned out) that his brain had been damaged in regions dedicated to language and motor control -- and yet these functions remained intact. The conclusion was that the brain is not modular (to use the modern term); rather, it is an undifferentiated mass that functions holistically.

Twenty years after the accident, a physician named John Harlow offered a quite different interpretation of Gage's case. Harlow, who had examined Gage many times over the years, revealed that Gage's personality, if not his functional ability, had changed profoundly after his accident. A previously fastidious, thoughtful, and responsible man, Gage had become "fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires....In this regard his mind was radically changed, so decidedly that his friends and acquaintances said he was 'no longer Gage.'" Gradually Gage's case came to be seen as a corroboration rather than refutation of the modularity hypothesis. The parts of Gage's brain that had sustained the most damage were his frontal lobes, which are now believed to be the seat of such lofty cognitive functions as moral reasoning and decision making.

Gagian neuroscience has supported the view of the mind as an assortment of modules linked to extremely specific functions and traits. Speech disorders caused by brain damage are lumped under the umbrella term aphasia. Some aphasics lose the ability to recall the names of people, or of animals, or of inanimate objects. Others can no longer decode verb endings. Aphasics may be able to have a conversation but not to read or write, or vice versa. Brain damage can result in dramatic additions to, rather than subtractions from, a person's psyche. Physicians have reported more than thirty cases of a condition known as gourmand syndrome, in which damage to the right frontal lobe results in an obsession with fine food. A Swiss political journalist made the most of his condition; after recovering from his stroke, he started writing a food column.

A major source of data for Gagian neuroscientists has been patients whose epilepsy is so severe that it can be treated only by severing the corpus callosum, the bundle of nerves connecting the two hemispheres of the brain. (The operation prevents the uncontrolled neural activity that precipitates seizures from engulfing the entire brain.) By studying these patients, the Nobel laureate Roger Sperry and others determined in the 1960s and later that each hemisphere serves different functions. The left hemisphere exerts primary control over language and speech, while the right hemisphere predominates in tasks involving vision and motor skills. The burgeoning field of split-brain research soon spawned the now-familiar pop culture clichés: our left brain embodies our "rational" self and our right brain our spontaneous, "creative" self.

A slew of self-help books -- such as Drawing on the Right Side of the Brain and Right Brain Sex -- offered advice on how to escape the confines of our stuffy left brains and become free-spirited right-brain types. Newspapers advertised subliminal tapes that supposedly expanded mental power by delivering different motivational messages to each hemisphere at the same time. Educators proposed revamping curricula to "unleash the right side" of students' brains. Scholars reinterpreted history through the lens of split-brain research; according to one historian, Stalin was a "left-hemispheric leader," and Hitler had a "right-hemispheric temperament."

Even those who should have known better, such as Michael Gazzaniga of Dartmouth College, a pioneer of Gagian neuroscience, contributed to the hype. In his 1985 book The Social Brain, Gazzaniga presented a critique of the welfare state based on his interpretation of split-brain experiments. More than a decade later Gazzaniga was expressing doubts about even some of the most cautious claims concerning the right and left hemispheres. In an article published in Scientific American in 1998, Gazzaniga emphasized the hazards of generalizing about the brain based on relatively few cases. People who suffer identical forms of brain damage may exhibit completely different effects. Moreover, the brain's plasticity makes it difficult to reach firm conclusions about the effects of brain damage on even the same person; individuals, after all, change over time.

Severe damage to the left hemisphere usually results in permanent impairment of speaking ability, but that was not true of a patient named J.W. Although an operation on his left hemisphere left him mute, J.W. acquired the ability to speak by means of his right hemisphere -- thirteen years after his original surgery. A British boy named Alex presents an even more remarkable case. Alex was born with a left hemisphere so malformed that he suffered from constant epileptic seizures. He was also completely mute. When Alex was eight, surgeons removed his left hemisphere to alleviate his epilepsy. Although physicians advised his parents not to expect improvement in his other symptoms, Alex began talking ten months later, and by the age of sixteen he was speaking fluently.

Gagian neuroscience highlights a major obstacle to understanding the human brain. The putative cornerstone of science is the ability to replicate experiments and thus results. But replicability poses a special challenge to mind-science because all brains, and all mental illnesses, differ in significant ways. This lesson emerges quite clearly from the history of lobotomies, according to Jack Pressman, a historian of medicine at the University of California at San Francisco. In his 1998 book, Last Resort: Psychosurgery and the Limits of Medicine, Pressman noted how difficult it was to draw firm conclusions about the value of the lobotomy, which sought to treat mental illness by destroying tissue in the brain's prefrontal lobes. Some patients seemed to benefit from the procedure; others were devastated. Some patients became wildly uninhibited, like Phineas Gage; others were left virtually catatonic. Pressman concluded: "Because every individual is comprised of a singular combination of physiology, social identity, and personal values, in effect each patient constitutes a unique experiment."

In September 1998, scientists from around the world gathered in Cavendish, Vermont, to commemorate the 150th anniversary of the accident of Phineas Gage. A report on the meeting in Science noted that researchers have still not settled the questions originally raised by Gage's case; scientists are "debating whether the frontal cortex functions as a unit or subdivides its duties." One participant in the meeting bravely declared that "the truth probably lies somewhere in the middle."

The Faddishness of Psychology

The skepticism of Socrates about the application of physical theories to human thought and behavior has proved to be extraordinarily prescient. Neuroscience remains peculiarly disconnected from higher-level approaches to the mind, such as psychiatry. The British neurophysiologist Charles Sherrington, who won a Nobel prize in 1932 for his studies of the nervous system, once wrote, "In the training and in the exercise of medicine a remoteness abides between the field of neurology and that of mental health, psychiatry. It is sometimes blamed to prejudice on the part of one side or the other. It is both more grave and less grave than that. It has a reasonable basis. Physiology has not enough to offer about the brain in relation to the mind to lend the psychiatrist much help."

The ascent of psychopharmacology in the 1960s led to hopes that mental disorders could be explained in biochemical terms. Because antipsychotic drugs such as chlorpromazine and reserpine reduce the activity of the neurotransmitter dopamine in the brain, psychiatrists began to view schizophrenia as a dopamine-related disorder rather than as a consequence of psychic trauma. The advent of antidepressant medications called monoamine oxidase inhibitors and tricyclics, which elevate levels of the neurotransmitters norepinephrine and serotonin, led to speculation that depression stems from a deficit of these neurotransmitters. The growing popularity of selective serotonin reuptake inhibitors such as Prozac has shifted the focus to serotonin alone as the key to depression. (As yet, there is no accepted explanation for lithium's effect on manic depression.) But even the originators of these neurotransmitter models of mental illness acknowledge their weaknesses. Given the ubiquity of a neurotransmitter such as serotonin and the multiplicity of its functions, it is almost as meaningless to implicate it in depression as it is to implicate blood. Moreover, as Chapter 4 will show, medications for mental illness are not as effective as they are often said to be.

Neuroscientists have sought to find physiological correlates of schizophrenia and other disorders by probing the brains of the mentally ill with PET and other imaging technologies. So far these efforts have yielded frustratingly ambiguous results. Typical was a widely publicized MRI study performed in 1990 at the National Institute of Mental Health. The researchers compared the brains of fifteen schizophrenics to the brains of their nonschizophrenic identical twins. All but one of the schizophrenics had larger ventricles -- fluid-filled cavities in the center of the brain -- than the nonschizophrenics. Lewis Judd, then the director of the National Institute of Mental Health, hailed the study as a "landmark" that provided "irrefutable evidence that schizophrenia is a brain disorder." Unfortunately, the researchers could not establish whether the enlarged ventricles were a cause or an effect of schizophrenia -- or of the drugs used to treat it. Follow-up studies also showed that many normal people have relatively large ventricles, and many schizophrenics do not.

There has also been a troubling schism between neuroscience and psychology. Neuroscientists are "making fundamental discoveries of great importance," the Harvard psychologist Jerome Kagan once remarked. "But the observable behavioral events to which these individual discoveries apply are often unclear...the big prize is understanding the relation between molecular and behavioral events. Each domain is moderately autonomous."

This aspect of the explanatory gap was touched on in a 1998 article in American Scientist, "Psychological Science at the Crossroads." The three authors, all psychologists, searched for references to neuroscience in the four most influential psychology journals: American Psychologist, Annual Review of Psychology, Psychological Bulletin, and Psychological Review. They found that the enormous increase in neuroscience research was not reflected in psychology citations. "Clearly neuroscience is rising in prominence but, according to our measures, not within mainstream psychology."

So far neuroscience has failed to bring about the sort of consensus within psychology that has marked the progress of other fields of biology. The neuroscientists V. S. Ramachandran and J. J. Smythies of the University of California at San Diego recently made this point in an essay in Nature:

Anyone interested in the history of ideas would be puzzled by the following striking differences between advances in biology and advances in psychology. The progress of biology has been characterized by landmark discoveries, each of which resulted in a breakthrough in understanding -- the discoveries of cells, Mendel's laws of heredity, chromosomes, mutations, DNA and the genetic code. Psychology, on the other hand, has been characterized by an embarrassingly long sequence of "theories," each really nothing more than a passing fad that rarely outlived the person who proposed it.

One psychological fad, or "theory," that has outlived its inventor is psychoanalysis. Although psychoanalysis has become in certain scientific circles the epitome of pseudoscience, some leading neuroscientists still find Freud's ideas compelling. Susan Greenfield of the University of Oxford is the director of the Royal Institution and one of Britain's most prominent neuroscientists. "One of the reasons I admire Freud, above and beyond, perhaps, his specific theories, was that he was a pioneer," Greenfield told a British journalist in 1997. "I am quite unusual, perhaps, as a neuroscientist in finding Freud inspirational."

Actually, Greenfield's affinity for Freud is shared by Floyd Bloom, chairman of the Department of Neuropharmacology at the Scripps Research Institute, author of several books on neuroscience, and editor in chief of the journal Science. When I asked Bloom whether he thought neuroscience might one day validate psychoanalysis, he replied, "I don't disagree with that." Twenty years earlier, he told me, he became convinced that neuroscience might provide insights into the abrupt shifts in perspective, or "intellectual gear shifting," that can occur during psychoanalysis. Bloom considered joining a psychoanalytic institute to gather material for his project; he decided against the move only because a sudden advance in molecular biology, which made it possible to mass-produce genes, lured him back to his laboratory.

Another high-profile Freudophile is Gerald Edelman, who won a Nobel prize for his work in immunology, switched later to neuroscience, and now directs the Neurosciences Institute in La Jolla, California. Edelman dedicated Bright Air, Brilliant Fire, a popular account of his theory of the mind, to "two intellectual pioneers, Charles Darwin and Sigmund Freud. In much wisdom, much sadness." Edelman remarked in a chapter on the unconscious:

My late friend, the molecular biologist Jacques Monod, used to argue vehemently with me about Freud, insisting that he was unscientific and quite possibly a charlatan. I took the side that, while perhaps not a scientist in our sense, Freud was a great intellectual pioneer, particularly in his views on the unconscious and its role in behavior. Monod, of stern Huguenot stock, replied, "I am entirely aware of my motives and entirely responsible for my actions. They are all conscious." In exasperation I once said, "Jacques, let's put it this way. Everything Freud said applies to me and none of it to you." He replied, "Exactly, my dear fellow."

Psychoanalysis and Sea Snails

Equally enamored of Freud is Eric Kandel, director of the Center for Neurobiology and Behavior at Columbia University. Kandel has dominated neuroscience for decades through a combination of brilliance and bullying. He is a coauthor of two leading neuroscience textbooks, Principles of Neural Science and Essentials of Neural Science and Behavior, and he has also exerted an influence on popular accounts of neuroscience. When displeased with the coverage of neuroscience in the New York Times, Scientific American, or elsewhere, he is known to call editors and reporters to complain and suggest how the coverage can be improved.

Born in Vienna, Kandel was trained in psychiatry at New York University and Harvard, but by the early 1960s he had turned exclusively to neuroscience. He decided to study the nervous system not of Homo sapiens but of Aplysia californica, a sea snail that has been described as a "purplish-green baked potato with ears." The creature's nerve cells are the largest known to science; they can be seen by the unaided human eye. It was a perfect laboratory for Kandel's investigations into the molecular basis of memory and learning. When sprayed in a certain spot with a jet of water, Aplysia jerks back inside its mantle. When touched repeatedly, however, it withdraws more lackadaisically and finally disregards the stimulus entirely. Through this process, called habituation, the sea snail learns not to associate the jet of water with harm.

Kandel and his colleagues produced the opposite of habituation -- an effect called sensitization -- by repeatedly spraying Aplysia while giving it an electric shock. The animal quickly learns to withdraw at even the slightest touch. Kandel's group showed that both habituation and sensitization produce molecular changes in the neurons controlling Aplysia's withdrawal reflex. In the case of habituation, neurons discharged fewer neurotransmitting molecules into the synapses connecting them to adjoining neurons; sensitized neurons, conversely, discharged more neurotransmitter. These experiments provided evidence for a proposal, first advanced in 1949 by Donald Hebb, that learning varies the strength of the connections between neurons. This Hebbian mechanism serves as the basis for an artificial intelligence model called neural networks (which I discuss in Chapter 7).
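Hebb's rule is simple enough to state computationally. The following sketch (not drawn from Horgan's text or Kandel's experiments; the learning rate, decay rate, and activity values are illustrative assumptions) shows the basic idea in Python: a connection weight grows when activity on both sides of a synapse coincides, and drifts back down when it does not, loosely echoing sensitization and habituation.

    # Minimal sketch of a Hebbian weight update. The numbers are arbitrary
    # illustrations, not parameters from any actual neuroscience experiment.
    def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
        """Strengthen the weight when pre- and postsynaptic activity
        coincide; otherwise let it decay slightly."""
        return w + lr * pre * post - decay * w

    w = 0.2  # initial synaptic "strength"

    # Repeated pairing of presynaptic and postsynaptic activity
    # (a crude analogue of sensitization): the weight grows.
    for _ in range(20):
        w = hebbian_update(w, pre=1.0, post=1.0)
    print(f"after paired activity:   {w:.3f}")

    # Repeated stimulation with no postsynaptic response
    # (a crude analogue of habituation): the weight drifts back down.
    for _ in range(20):
        w = hebbian_update(w, pre=1.0, post=0.0)
    print(f"after unpaired activity: {w:.3f}")

The neural networks mentioned above elaborate on the same principle, adjusting many such weights at once so that frequently co-active units become more strongly coupled.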

In the 1990s Kandel and his colleagues performed experiments on what has been hailed as a potential "E = mc² of the mind," a protein that apparently serves as a master switch in the formation of memory. Together with other groups, Kandel's team showed that the CREB protein helps transform short-term memories into long-term ones in Aplysia; when the protein is chemically neutralized, Aplysia cannot form the long-term memories characteristic of sensitization or habituation. (CREB stands for cyclic AMP-response element binding.) Other researchers have performed similar experiments in fruit flies, mice, and other organisms.

At an age when most scientists are content to leave the field to younger colleagues, Kandel has remained very much in the fray. An article on memory research in the New York Times Magazine in February 1998 featured a full-page photograph of Kandel wearing a blue-striped shirt and red bowtie and gripping a slime-glazed Aplysia. The article noted that Kandel had "pioneered much of the research into the molecular basis of memory" and remained at the forefront of his field. He was trying to parlay his scientific achievements into commercial success by forming a company called Memory Pharmaceuticals, which markets drugs that allegedly decelerate, stop, or even reverse memory loss.

The article mentioned that Kandel had been interested in psychoanalysis before turning to neuroscience. What the article did not mention is that Kandel had undergone psychoanalysis early in his career and even considered becoming an analyst. Although he was "fruitfully distracted by neurobiology" (as the Times put it), he never stopped believing in the theoretical and therapeutic potential of psychoanalysis. He retained the hope that Freud's theories about the mind would one day be substantiated by neuroscience.

Kandel spelled out this hope in "A New Intellectual Framework for Psychiatry," published in American Journal of Psychiatry in April 1998. He noted that his experiments and others had shown that experience produces physical changes in neurons. More specifically, habituation or sensitization of neurons can turn genes on or off and otherwise affect their expression. These findings implied that experience, such as traumatic events in childhood, could cause neurosis through both neurochemical and genetic effects. In the same way, psychoanalysis and other psychotherapies might produce long-term beneficial effects with a genetic basis. "As a result of advances in neural science in the last several years," Kandel proclaimed, "both psychiatry and neural science are in a new and better position for a rapprochement, a rapprochement that would allow the insights of the psychoanalytic perspective to inform the search for a deeper understanding of the biological basis of behavior."

I met Kandel in the fall of 1997 in his office on the sixth floor of the Psychiatric Institute in Manhattan. His office overlooks the Hudson River, and as we shook hands the sun was descending, blood red, behind the New Jersey skyline. Like other neuroscientists whom I had interviewed, Kandel oscillated between pride and humility as he reviewed his field's performance. When I asked if he thought memory was on the verge of becoming a "solved" problem, Kandel grimaced and shook his head. He noted that the great neuroscientist Ramón y Cajal had once said that problems are never exhausted; only scientists are.

It is possible, Kandel elaborated, that the CREB protein and other findings could reveal the common basis for many different types of memory, just as the discovery of DNA's structure had provided a unified vision of heredity. But memory is "far from solved," Kandel emphasized. Researchers must still determine how the different regions of the brain contribute to the encoding, consolidation, storage, and recall of a memory. "We don't know a goddamn thing about any of those things."

Most scientific fields, he mused, alternate between periods during which they become more complex and periods during which they become more unified. "We are now in an age of splitting," he said. He had had to revise his classic textbook, Principles of Neural Science, three times since it was first published in 1981 to accommodate the deluge of new findings. "The easy problems have been solved," Kandel said. "We are now confronting those that are most difficult."

A central problem for neuroscience, he remarked, was learning how the brain constructs pictures of the world from many disparate pieces. The brain does not mirror the world the way a camera does, Kandel emphasized; "it decomposes the image, it decomposes all sensation, and then reconstructs it." Research done by Patricia Goldman-Rakic and others on live primates could yield clues about how the brain creates its picture of reality. "I think that's a very effective methodology," Kandel said. But like Torsten Wiesel and Gerald Fischbach, Kandel emphasized that the binding problem -- the Humpty Dumpty dilemma, to use my term -- remains very much unsolved.

When he first became a neuroscientist, Kandel recalled, he thought there would be a "rapid merger" between neuroscience and psychiatry. Obviously that synthesis has not occurred. Kandel said that psychoanalysts, who dominated psychiatry in the 1950s and 1960s, were partly to blame for this lack of progress. "Psychoanalysis went through a phase in which it was so confident of its effectiveness that it expanded its interests to all areas of psychiatric disease and all areas of medicine. That was part of its downfall. Insofar as it works, it probably only works in a limited set of circumstances." Psychoanalysts had also been "delinquent" in not questioning their own methods and putting them to the test, Kandel said.

Many of Freud's larger ideas -- such as his assertion that childhood conflicts shape character and that much of our mental life occurs below the level of awareness -- are now seen as "obvious," Kandel said. "I think everyone sort of accepts that." But questions remain about more specific aspects of Freudian theory, such as the precise manner in which childhood experiences give rise to various personality traits and disorders. "Do these hold empirically and under what circumstances? Are they universal? And even more importantly, does psychoanalysis work and under what circumstances?"

Kandel had a "gut feeling" that psychoanalysis works -- his own analysis had made him a better person, he assured me -- but proving that it works is another matter. Research could show that psychotherapy produces beneficial changes in the brain that "are as specific -- maybe more specific! -- than drugs. That would be quite wonderful." After all, if talking to a friend or pastor or therapist produces changes in the brain, as it must, "why should that be of any less value than using Prozac, right?"

Even if research cannot demonstrate that psychoanalysis works, it will remain a "very humane, rich perspective on the human mind." Early in this century psychoanalysis served as a much-needed countermeasure to the excesses of behaviorism, which offered a "very shallow" picture of mental representation. Psychoanalysis also anticipated the discovery of modern neuroscience and cognitive psychology that the brain constructs reality rather than simply mirroring it. "So at the worst [psychoanalysis] can give us a weltanschauung which is quite rich. At best it may actually turn out to be a therapy which has real utility."

It is possible, Kandel said, that the effectiveness of psychoanalysis may stem from the expectations of patients -- in other words, from the placebo effect. "Maybe psychoanalysis is simply a very effective way of recruiting a patient's trust for therapy purposes," he said. "One would hope that there's more, but that could be all of it." Kandel brushed aside the suggestion that such a finding would place psychoanalysis on the same level as faith healing. Faith healers, he asserted, are much more likely to be charlatans and frauds than analysts, psychiatrists, and others closer to the scientific mainstream. "That's not to say that among well-trained physicians you won't find a charlatan, but the statistical probability is much reduced."

Freud as Neuroscientist

Ironically, toward the end of his career Freud himself seemed to doubt whether neuroscience would provide deep insights into the human psyche. Before creating psychoanalysis, Freud spent more than a decade performing research that would now be characterized as neuroscience. He studied the nervous systems of lampreys and crayfish, and from 1882 to 1885 he worked closely with brain-damaged patients at the Vienna General Hospital. He published more than three hundred papers and five books on neurobiology, including a monograph on aphasia and other conditions resulting from neural trauma.

In 1895 Freud briefly became convinced that the human psyche and its disorders could be understood in terms of purely physiological phenomena, such as the newly discovered neurons. He wrote to his friend Wilhelm Fliess:

One evening last week when I was hard at work...the barriers were suddenly lifted, the veil drawn aside, and I had a clear vision from the details of the neuroses to the conditions that make consciousness possible. Everything seemed to connect up, the whole worked well together, and one had the impression that the Thing was now really a machine and would soon go by itself.

That same year Freud sketched out his vision of a physiologically grounded theory of the mind in a manuscript that later came to be called Project for a Scientific Psychology:

The intention is to furnish a psychology that shall be a natural science: that is, to represent psychical processes as quantitatively determinate states of specifiable material particles, thus making those processes perspicuous and free from contradiction. Two principal ideas are involved: (1) What distinguishes activity from rest is to be regarded as Q, subject to the general laws of motion. (2) The neurons are to be taken as the material particles.

Freud never published his manuscript, and just a few months later he wrote to a colleague: "I no longer understand the state of mind in which I hatched out the Psychology." Immediately after this period he began constructing a purely psychological model of the mind, psychoanalysis. Over the course of his career, Freud became increasingly skeptical about whether the mind and its disorders could be explained in physiological terms. Shortly before his death in 1939, he seemed to rule out the possibility that psychology would ever be united with neuroscience:

We know two things about what we call our psyche (or mental life): firstly, its bodily organs and scene of action, the brain (or nervous system) and, on the other hand, our acts of consciousness, which are immediate data and cannot be further explained by any sort of description. Everything that lies in between is unknown to us, and the data do not include any direct relation between these two terminal points of our knowledge. If it existed, it would at the most afford an exact localization of the processes of consciousness and would give us no further help toward understanding them.

Like Socrates more than two thousand years before him, Freud seemed to be suggesting that the explanatory gap might never be closed. His premonition has been borne out so far by the inability of neuroscience either to confirm or to falsify Freud's own theories.

Copyright © 1999 by John Horgan

Product Details

  • Publisher: Free Press (November 14, 2000)
  • Length: 336 pages
  • ISBN13: 9780684865782

Raves and Reviews

Jim Holt, The Wall Street Journal: "The Undiscovered Mind is full of fascinating vignettes in which noted brain researchers are caught thinking out loud....Riveting [and] eye-opening."

Walter A. Brown, Clinical Professor of Psychiatry, Brown University School of Medicine and Tufts University School of Medicine: "John Horgan has done it again. In this rich, irreverent, thorough, and entertaining tour of mind-science, Horgan makes complicated lines of research accessible and compelling."

Abraham Verghese, Chicago Tribune: "Compelling....The Undiscovered Mind is a well-researched and important book."
