What Is a Thought???

A Biodigital Hypothesis

If you ever asked a cardiologist, “Physiologically speaking, what is a heartbeat?”—they could bore you to tears with details on anatomy, electrophysiology, hemodynamics, innervation, and pulmonary and peripheral circulation. But if you asked a psychiatrist the equally pertinent question—“Physiologically speaking, what is a thought?”—the only honest answer would be, “We have no freaking clue.” Because we don’t.

That’s the most telling measure of psychiatry’s scientific knowledge that I can think of. We don’t know how the brain-mind executes any of the higher functions that are the actual focus of psychiatry—the generation of thought and behavior. And so the bulk of our psychiatric research to date has focused on finding medications that produce desired effects in the brain-mind, even though we know almost nothing about how its core functions actually operate.

I began psychiatric training in 1981, when the reign of psychoanalysis was yielding to the current biological wave of psychiatry. The promise of that movement is now fading—in the face of complicated questions about efficacy, corruption of our science, and iatrogenic harm done to patients by our medications and even our diagnostic labels. Psychiatry is now under sustained attack, and for the most part its response has been a defensive crouch—fighting to maintain an illusion of mastery, when in fact what we are treating is a mystery.

You might be distracted by my references to the “brain-mind.” It’s not an affectation, but rather an acknowledgment of plain truth. The biological model of psychiatry that’s prevailed for the last several decades has prospered by focusing on the brain as an anatomical entity, while neglecting the obvious—that thought exists, and has a more than considerable influence on the disorders of perception and behavior that we identify as psychiatric illness. My composition of this article can’t be accounted for by mere shifts in my balance of norepinephrine, serotonin, acetylcholine, GABA, and dopamine. Something much more subtle and marvelous must be in play. But because we don’t know what it is, biological psychiatry ignores it—vainly neglecting the enigma at the core of our profession, to the point of rank denial. So, unless I’m discussing anatomy or neurophysiology, I use the term “brain-mind” to acknowledge this ignorance, and thus maintain an appropriate level of humility and wonder.

Our ignorance of the physiology of cognition is of course understandable. The human brain is fabulously complex, composed of some 86 billion neurons and entirely encased in bone. Physical intrusion into the brain is highly likely to cause irreversible damage, with significant risk of lethal infection. Using contemporary techniques to unlock these secrets would be grossly unethical, and probably not that fruitful anyway—since every brain is unique. Ironically, our mind may be one of the last frontiers of human understanding.

But just because we can’t do hands-on research to establish the physiology of thought, that doesn’t mean we can’t apply some rational speculation to better conceive of and understand the nature of the mind. Physicians in other specialties turn to mechanical models to help them conceptualize and study the function of their organ systems. Cardiologists study the heart using computational models derived from mechanical pumps, and nephrologists use models based on mechanical filters. So, if you were going to study the functions of the brain, what kind of technology would you use as a model?

Well, a computer, of course. In my research on the subject I’ve found references contesting this analogy on rather irrelevant fine points. No such model is exhaustive, but models like these do help us conceptualize and study the organ systems in question. The brain takes in information, processes it, and then uses it to exert control over our body and to act upon our environment. And whenever we want to give some existing product a similar set of capabilities, we stick a data processor on it and market it as “smart”—like a brain, right?

Conceptually speaking, the defining characteristic of a computer is its marvelous ability to use information to construct virtual machines—programs, or “apps”—that are in turn used to process other information. This technological ability is of course a recent development in human history—but it seems clear that the capacity of information to be used to process other information wasn’t invented, but rather discovered by man. Do you think that this capacity was just lying around waiting to be discovered by us? Or isn’t it quite possible, even likely, that this capacity is already being exploited by all sentient life forms—and perhaps some non-sentient life forms as well? That this capacity emerged in the course of evolution, to be utilized in nature, is no more amazing than all the other wonders of biology we find. If any process or niche exists, life always seems to find a way to exploit it. And there’s simply no better way to account for the brain’s myriad functions than to assume that it might be utilizing this capacity as well.
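To make that idea concrete, here is a minimal sketch in Python (the toy “instruction set” and interpreter are my own illustration, not anything established): a tiny “virtual machine” whose program is nothing but data, a list of instructions, being used to process other data.

    # A toy illustration: a "program" that is nothing but information,
    # used by a tiny interpreter to process other information.
    # The instruction set here is invented purely for this example.

    def run(program, value):
        """Execute a list of (operation, operand) pairs against a value."""
        for op, operand in program:
            if op == "add":
                value += operand
            elif op == "multiply":
                value *= operand
            elif op == "print":
                print(operand, value)
        return value

    # This "virtual machine" is constructed entirely of information:
    doubler_plus_one = [("multiply", 2), ("add", 1), ("print", "result:")]
    run(doubler_plus_one, 10)   # prints "result: 21"

Nothing in the hardware changes when we swap in a different program; only the information does.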

At its most basic level, a computer consists of relatively few pieces of specialized hardware with a lot of redundant structure, built to contain and utilize a complex architecture that consists entirely of information. This information is stored as huge arrays of binary code—each unit of which is commonly represented as a choice of either “0” or “1”, and referred to as a bit. This data is coded and preserved as complex patterns within the storage media of the computer, where it can be utilized by the hardware to execute all manner of complex tasks. Just consider the variety of functions that can be performed by these constructs of data on your cell phone—music composition, financial management, videography, communications, etc.—all working within the architecture of the operating system, likewise constructed entirely of information. All this information can be transferred through and to a variety of media—wire lines, optical lines, magnetic tape, radio waves, magnetic discs, optical discs—even punch cards, if you’ve got the time and storage space. The medium used to transfer or store the information is irrelevant, other than how efficiently, conveniently, and reliably it does so. No matter what medium you’re using, the information is the same. So why couldn’t such information be stored in a system of flesh and blood?
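As a crude illustration of that medium-independence (again a sketch of my own, in Python), the same short message can be rendered as text or as a string of bits, and recovered intact from either form:

    # The same information in two different representations. Neither
    # encoding changes the information itself, only its form.

    message = "thought"

    # Encode each character as 8 bits:
    bits = "".join(format(byte, "08b") for byte in message.encode("ascii"))
    print(bits)        # "01110100011010000110111101110101..."

    # Decode the bits back into the original text:
    recovered = bytes(
        int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
    ).decode("ascii")
    print(recovered)   # "thought"

Whether those bits travel by wire, radio wave, or punch card, the message survives unchanged.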

Structures analogous to the components of a computer can be roughly identified within the brain. There is the organic “hardware” of the brain at large. The “firmware” consists of the structures of the lower brain that manage noncognitive functions—coordinating motor function, processing sensation and expression, maintaining homeostasis, and managing the storage of memory—the basic functions of the brain that we are born with, allowing us to breathe, feed, and otherwise interact with the environment. Our brain-mind has peripherals as well—our eyes, ears, hands, speech, and all other organs of sensation and expression.

But the most mysterious component of our brain-mind is the “software” wonder of cognitive processing in the cerebral cortex. Imagine the millions of molecular shifts that must occur there for each moment that we spend pondering our problems, contemplating our future, recollecting our past pains and pleasures, reading a poem aloud, or enjoying a good film. I’m an agnostic myself—but if there is in fact a divine spark within us, it rests here in the middle of this miracle. It’s a miracle that goes on and on…one that includes me at this moment writing this article, and you at your moment of reading it. It’s the mind, the seat of our consciousness—the domain of thoughts, hopes, worries, dreams, and to-do lists. It’s where we make decisions on what to eat, whom to marry, what job to pursue, when to go to bed, and whether we’re going to try an antidepressant.

But there’s no space for this miracle, no place for the mind, in the biological model that dominates psychiatry today. Because it’s not deemed relevant to the selection of a medication. Because it’s messy, inconvenient, and time-consuming to deal with. But most of all, because we have no idea how it all works.

At this time, our entire biological understanding of the “software” of cognition is limited to the rough equivalent of a “bit” of memory—the molecular alteration of messenger RNA during the acquisition of memory in the hippocampus. A real-time video made by researchers at the Albert Einstein College of Medicine in 2014 shows fluorescently labeled beta-actin messenger RNA traveling from the nucleus through the dendrites of a neuron in the hippocampus of a mouse, in response to stimulation with light. This is consistent with the hypothesized role of such mRNA in memory storage.

This involvement of mRNA in the storage of memory supports the hypothesis that thought could be based on the sort of digital processing we see in computers. Note that mRNA’s primary biological function is the transmission and translation of the genetic blueprint of DNA—which holds the entire database of life in a code constructed of four discrete nucleotides. This code is quaternary, instead of binary like the code used by our computers. A quaternary system of processing is entirely possible; binary is simply our industry standard. So isn’t it logical to assume that the same digital information system that life already relies upon for reproduction could be evolutionarily adapted for processing cognition?
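To see just how little separates a quaternary code from a binary one, consider this small Python sketch (my own illustration; the base-to-bits mapping is arbitrary): each of the four nucleotides carries exactly two bits, so any DNA sequence can be re-expressed in a computer’s native binary and back again without losing anything.

    # Each nucleotide carries exactly two bits of information, so a
    # quaternary code and a binary code are fully interchangeable.
    # The mapping below is arbitrary, chosen only for illustration.

    BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
    BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

    def dna_to_binary(sequence):
        return "".join(BASE_TO_BITS[base] for base in sequence)

    def binary_to_dna(bits):
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    sequence = "GATTACA"
    bits = dna_to_binary(sequence)
    print(bits)                 # "10001111000100"
    print(binary_to_dna(bits))  # "GATTACA"

The point is not that neurons literally run this code, but that a four-symbol alphabet is every bit as capable of carrying and processing information as a two-symbol one.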

I know perfectly well that this hypothesis is highly speculative, and proof thereof is far beyond my capabilities. But the principles of science are not based on the assumption of knowledge—they’re based on the assumption of doubt. And the reason I’m advancing this model is not to presume hard scientific knowledge, but rather to promote hard scientific skepticism, to be applied to the woefully inadequate assumptions that drive our current treatment model.

We are living in an era where the remedies most available to our patients are crude biological interventions that neglect the full nature of psychiatric disorders. They produce their effects by activating chemical messengers in rather primitive corners of the lower brain, bypassing any regard for the role of cognition in psychopathology. The limits of their benefits became increasingly obvious upon release of the CDC’s study on suicide in June 2018. This study documented a 30% increase in the U.S. suicide rate from 2000 to 2016—right here in this Age of Prozac, with more people receiving diagnoses and treatment for psychiatric disorders than ever before.

Our personal software system—all our cumulative thoughts, memories, and feelings, our cognitive identity—has a name. It’s the psyche, defined as “the human soul, mind, or spirit.” There was a time when psychiatry actually treated it. But instead of being appropriately humbled by this yawning void in our understanding of the brain-mind, contemporary psychiatry has chosen to fall in love with the relatively modest gains in biological knowledge that we’ve seen over the past few decades. This infatuation has been fueled by numerous financial interests, and hyped by high hopes, scientific indiscipline, and misinformation that greatly exaggerates our understanding of what we do.

If you have a computer with contaminated data or some other software problem, there are interventions to consider: downloading a software patch from a website, removing a virus, reformatting and reloading a hard disc, or reinstalling the operating system. In short, you modify the information on the computer. So how can we modify the software of our brain-mind?

Well, external modification of data in a computer requires the input of new data through a peripheral—such as a keyboard, DVD drive, or modem. Our brain-mind can likewise use our organs of sensation to obtain information from the environment. Have you ever read a book, or had a conversation, that had a profound effect on how you perceive your life? Have you ever seen a movie that changed how you felt about its subject matter? Do certain songs trigger certain thoughts, feelings, or memories? All these examples reveal the psychic power of external information—most starkly demonstrated by the onslaught of negative information that floods the senses of someone experiencing a major traumatic event—enough information, sadly, to change one’s mind, and life, forever.

A well-established term for “software” intervention in the brain-mind already exists: “psychotherapy.” I’m not convinced that passive and “neutral” models of psychotherapy are the most efficient remedy—sessions in the vein of life coaching might be more efficient and more accessible, teaching mature coping strategies to patients who may not have had good “programming” in their upbringing. Such a model could be made more available in rural areas, where suicide is more prevalent. This is an especially urgent need, because the findings of the CDC’s suicide study suggest that many suicides occur not because something happens to your brain, but because something happens to your life. It would be interesting to see just how much we could improve the efficiency and efficacy of psychotherapy if we devoted to it even a portion of the financial resources that we now spend developing psychiatric drugs for the marketplace.

It’s often said that if the only tool you have is a hammer, then everything looks like a nail. I can’t think of a more penetrating illustration of this adage than contemporary psychiatric practice. Because of our ignorance of cognitive physiology, we hide behind the biological model of psychiatry—where the products are easily marketed and sold, requiring little investment of time or effort from either party. We avoid the obvious fact that psychiatric disorders are inherently eclectic disorders that engage both the brain and the mind. And since the mind appears to be even more individuated than the brain is, and perhaps more complicated as well, effective treatment would certainly require a wider array of interventions than we have now.

If we simply acknowledge that the stuff of thought could be created from something as abstract as information, then perhaps we can see the folly in what we are expecting from our biological interventions. If you had a software problem with your laptop and brought it in for repair, and the technician insisted it was a hardware problem and offered you a hardware solution that didn’t work, you wouldn’t be a satisfied customer—and he/she wouldn’t be a great technician.

There is wisdom in ignorance, in knowing what you don’t know. Our profession needs more of that kind of wisdom nowadays.
