Psychiatry’s Next Revolution: A New Way to Think About Thought



“Too far-fetched to believe, too obvious to ignore.” 

–Nicholas Sparks

If you asked a cardiologist, "Physiologically speaking, what is a heartbeat?"–they could bore you to tears with details on anatomy, electrophysiology, hemodynamics, innervation, and pulmonary and peripheral circulation. But if you asked a psychiatrist the entirely pertinent question–"Physiologically speaking, what is a thought?"–the truth is we've been without a clue. Until now.

Of all psychiatry’s mysteries, this has been the crucial void in our scientific knowledge–how the brain-mind executes any of the higher functions that are the actual focus of psychiatric practice: the generation of thought and behavior. This gap in our understanding of essential physiology has been singularly responsible for psychiatry’s own split personality.

Brain vs. Mind

In 1808 a German medical professor named Johann Christian Reil used Greek roots to coin the term "psychiatry," which means "healing of the psyche." The psyche itself is defined as "the human mind, soul, or spirit," with no reference at all to the brain. Reil probably did so because he lived in a time when we had some grasp of the mind, but almost nothing was known about the functions of the brain. To this day psychiatry is defined as "a branch of medicine that is focused on mental, emotional, and behavioral disorders." "Mental" is an adjective referring specifically to the mind–which in turn is defined in the Oxford dictionary as "the element of a person that enables them to be aware of the world and their experiences, to think, and to feel; the faculty of consciousness and thought." So even in psychiatry's current etymology, references to the brain are curiously absent. Yet our profession spent nearly two centuries oscillating between its two existential poles–the brain, an anatomical entity; and the mind, an abstract entity which happens to occupy the same space–without any definitive scientific understanding of either. This ambiguity made our profession ripe for corruption and folly–with results that included Emil Kraepelin's biologically-oriented embrace of "degeneration theory," which helped inspire the rise of fascism in the early 20th century; and the psychological invention of "penis envy"–Sigmund Freud's disdainful alibi for the torments of patriarchy.

The psychoanalytic model that dominated during my youth was thoroughly entranced with its own ideas, projecting structural models that were independent of any grounded physiology. But an explosion of neurophysiological progress in the late 20th century, along with a convergence of financial interests, led to modern psychiatry’s heightened fixation on the brain. The intellectual excesses of psychoanalysis gave way to a new treatment model that began by likening depression to diabetes, and introducing America to the concept of a “chemical imbalance.” Although psychiatry has since disowned these tropes, it still seems to perceive our wondrous brain as little more than an extremely complicated gland–neglecting the mind as if it no longer existed.

This ploy has been fabulously successful in indoctrinating the general public with the idea of mood as a chemical aberration. An amazing number of the patients I see in evaluation tell me that they are depressed "for no reason"–and then have to be walked through the entirety of their psychosocial history in order to be reminded what is actually pissing them off and/or bringing them down. It's as if people have utterly forgotten that they might have something to learn from their feelings–which are instead perceived as a bunch of nuisance symptoms that need to be blotted out. Such patients are the fulfillment of every biological psychiatrist's dream–a generation who think they have nothing to talk about, and just want their pill.

An Obvious Model Is Ignored

A couple of years ago I released a website post and podcast proposing a computer model of the brain-mind–in the manner of the mechanical models that are used to study the physiology of other organs, such as using a pump model to study the heart. Such models are not exhaustive, but they do help us to conceptualize and study the organ systems in question. Like a computer, the brain takes in information, processes it—and then uses it to exert control over our body, and our environment. And every time we have some sort of consumer product we want to update with a similar set of capabilities, we stick a data processor on it, and then market it as “smart”—like a brain, right?

I've found that if I ask my peers what they think might constitute a thought, the usual answer is something like: "I don't know, maybe a bunch of synaptic connections." Because modern psychiatry is all about synaptic connections. Nearly every medication we use works at synaptic connections. So in the eyes of modern psychiatry, this simply must be where the action is–with little apparent regard for what might be going on inside the cell body of the neuron. That would explain why, for over seventy years, our prevailing model for memory and learning has remained the one proposed in 1949 by Donald Hebb: synaptic plasticity. This hypothesis maintains that learning is the result of long-lasting chemical and/or physical changes in the synaptic connections of neurons, occurring in response to repeated and persistent stimulation by the presynaptic cell. The process is often summarized by the clever axiom: "Cells that fire together wire together." Hebb proposed that a number of these fortified connections assemble to form an engram–a hypothetical permanent change in the brain that would account for the existence of a memory. A single memory might engage a network of several such engrams dispersed to different areas of the brain, referred to as an engram complex.

This all sounds to me like something akin to an analog computer–which doesn't use digital processing, and became obsolete around 1960. In a 2010 issue of Scientific American Mind, cognitive neuroscientist Paul Reber estimated that the human brain has a memory capacity of about 2.5 petabytes–which is 2,500 terabytes, or 2.5 million gigabytes.1 It's hard to believe that Hebb's model–concocted at a time when digital processing was unknown–could account for so much storage capacity. Fortunately, more recent evidence suggests that our brain-mind may be much more up-to-date than that–having already completed its own digital revolution.
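For anyone who wants to check those conversions, Reber's figure works out as follows–a trivial sketch in Python, using decimal storage units (the 2.5-petabyte number itself is Reber's back-of-the-envelope estimate, not a measurement):

```python
# Reber's estimate, expressed in decimal storage units:
# 1 petabyte = 1,000 terabytes = 1,000,000 gigabytes.
petabytes = 2.5
terabytes = petabytes * 1_000        # 2,500 TB
gigabytes = petabytes * 1_000_000    # 2,500,000 GB

print(f"{petabytes} PB = {terabytes:,.0f} TB = {gigabytes:,.0f} GB")
# → 2.5 PB = 2,500 TB = 2,500,000 GB
```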

Nucleic Acids Have Digital Powers

All the components of a modern computer are classified as either structural hardware, or the information-based construction of software–a relationship that neatly mimics that of the brain and the mind. What first triggered me to consider the possibility that cognition might be digitally based was a video posted on YouTube by the Albert Einstein College of Medicine in 2014.2 The video shows fluorescently tagged beta-actin messenger RNA traveling through the dendrites of a neuron in the hippocampus of a mouse, upon exposure to light. This video intrigued me because both RNA and DNA are commonly regarded as a form of digital memory storage–utilizing a quaternary code of nucleotides, rather than the binary code which is our industry standard. This in turn led me to consider that such digital memory might be used to execute digital processing as well–within an elaborate software construction of nucleic acids, one that we call "a mind."

Since my last podcast I've found an impressive amount of information to support this neurodigital hypothesis for the nature of thought. One source is an eye-opening Wikipedia entry, with 54 citations, on the subject of "DNA computing."3 There you will discover "an emerging branch of computing which uses DNA, biochemistry, and molecular biology hardware, instead of traditional electronic computing"–and then get your mind blown by the fact that the capacity of nucleic acids to perform computation was first demonstrated in 1994! The entry goes on to note that DNA processing is slow relative to our computers, but that this is compensated by its capacity to perform an enormous number of computations in parallel.
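That 1994 demonstration was Leonard Adleman's famous experiment: he encoded a seven-city Hamiltonian-path problem in short DNA strands, let hybridization and ligation assemble every candidate path in parallel, and then chemically filtered out the invalid ones. Here is a hypothetical Python sketch of the same brute-force search–the four-city graph is illustrative, not Adleman's actual instance:

```python
from itertools import permutations

# Adleman-style brute force: generate every ordering of the cities
# (the step that DNA ligation performed massively in parallel), then
# keep only the orderings whose consecutive pairs follow directed edges.
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
cities = ["A", "B", "C", "D"]

def hamiltonian_paths(cities, edges):
    """Return every ordering that visits each city once via valid edges."""
    return [path for path in permutations(cities)
            if all((a, b) in edges for a, b in zip(path, path[1:]))]

print(hamiltonian_paths(cities, edges))
# → [('A', 'B', 'C', 'D')]
```

The point of the DNA version is that all of these candidate paths form simultaneously in a test tube–the massive parallelism that compensates for slow individual operations.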

But that's nothing compared to the hot topic of "DNA digital data storage"–which generates 82,600 results on Google, including a multitude of companies that are already in business.4 The primary attraction is information density: it's estimated that one gram of DNA could store up to 455 billion gigabytes of information, and that the entirety of humanity's data could thus be stored in a single room. Other advantages include the rapid transportation of huge amounts of data, and vastly greater stability of data over long time periods. It's an expensive process, converting all those 1's and 0's to A's, G's, C's, and T's–but its financial promise is attracting an overwhelming amount of speculation.
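At the logical level, that conversion is just a change of base–each nucleotide can carry two bits. A minimal sketch of the mapping (real DNA-storage codecs add error correction and avoid troublesome sequences such as long runs of a single base; this toy version ignores all that):

```python
# Two bits per nucleotide: 00->A, 01->C, 10->G, 11->T.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bits_to_dna(bits: str) -> str:
    """Encode an even-length bit string as a nucleotide string."""
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna: str) -> str:
    """Decode a nucleotide string back into bits."""
    return "".join(DECODE[base] for base in dna)

message = "0100100001101001"           # the ASCII bits for "Hi"
strand = bits_to_dna(message)
print(strand)                          # → CAGACGGC
assert dna_to_bits(strand) == message  # the round trip is lossless
```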

Learned Behaviors Can Be Transplanted Using RNA

These novel applications clearly demonstrate the capacity of nucleic acids to store and process digital data, which should intrigue us all–but they don't prove that nucleic acids are being utilized in this manner by living organisms, or in our own brains. In May 2018, however, eNeuro published the results of a UCLA study headed by David Glanzman that succeeded in using RNA to transfer learned behavior from one animal to another.5

They used Aplysia californica, a species of sea slug, for this study, because it has some of the largest neurons found in nature–up to a millimeter in diameter. When tapped by a researcher, these slugs typically exhibited a defensive withdrawal reflex lasting about one second. The experimental design called for such taps to be followed by the administration of mild electric shocks to the slugs' tails to prolong their response–each receiving a series of five shocks administered 20 minutes apart on one day, with another such series administered 24 hours later. This training resulted in observable sensitization of the slugs–manifested by the increased intensity and length of their defensive withdrawal reflex when given ordinary taps. These prolonged contractions lasted an average of 50 seconds, rather than the typical one-second response.

After the second day of shocks, the researchers extracted RNA from the nervous systems of these sensitized slugs. Seven untrained slugs received an injection of this sensitized RNA, and a control group of seven untrained slugs was injected with RNA from other untrained slugs. All seven of the slugs that received the sensitized RNA exhibited the prolonged response seen in the trained slugs, lasting an average of 40 seconds in this group. All the controls injected with unsensitized RNA exhibited no such prolongation of response. The researchers also did in vitro studies of motor and sensory neurons obtained from both sensitized and unsensitized slugs, discovering that the sensitized RNA produced increased excitability in sensory neurons, but not motor neurons.

...and Synaptic Plasticity is Dethroned

Glanzman has pointed out that these findings negate the longstanding model of synaptic plasticity, noting that, "If memories were stored at synapses, there is no way our experiment would have worked."6 It is his belief that memories are instead stored in the nuclei of neurons. With this study, Glanzman managed to firmly elevate RNA's prospective role as that long-sought, fabled engram–the fixed change in the brain that accounts for the existence of memory.

New Pieces of the Puzzle

Equally exciting was a review article that appeared in The Neuroscientist in October 2020, entitled "Circular RNAs in the Brain: A Possible Role in Memory?"7 The article cites a large body of recent evidence supporting a critical role for regulatory RNAs in coordinating "experience-dependent neural plasticity" (or as we call it, "learning")–and focuses on the contribution of a structurally distinct class of RNAs known as circular RNAs, or circRNAs. CircRNAs were first visualized by electron microscopy in 1979, in the cytoplasm of HeLa cells–an immortal, surprisingly durable cell line that has been used extensively in scientific research since 1951, when it was derived, without consent, from the cervical cancer cells of an African-American woman named Henrietta Lacks.

CircRNAs were originally thought to be artefacts of splicing, or rare oddities derived from only a few genes. However, thousands of circRNAs have since been identified, and they are particularly enriched in brain tissue. These closed-loop, single-stranded RNA molecules are highly stable and long-lived compared to other forms of RNA, due to their structural resistance to exonuclease-mediated degradation. Their circular structure is similar to that of viroids–virus-like RNA pathogens that are able to infect flowering plants without the protection of a virus' protein coat–and plasmids, the autonomous circular DNAs best known from bacteria (the DNA in our own mitochondria is likewise circular). Given their structural stability, circRNAs could potentially serve as "memory molecules," transferring packets of information between brain cells. Current evidence suggests that this class of RNAs contributes greatly to higher-order functions such as cognition and memory. The chart below provides a prospective model for how circRNAs may contribute to the incorporation and preservation of long-term memory:

Conjectured role of circRNAs in learning and memory formation

(A) General model of how memories are formed, stored, and retrieved.

(B) During learning, two peaks of gene transcription contribute toward the formation of memory. However, linear RNAs and their actions are relatively short-lived. Given their long life span, there is the possibility that circRNAs may be necessary for the stability, and hence continuity, of cellular behavior that underlies memory. Thus, we propose that circRNAs act as a mechanism to keep track of the history of experiences (i.e., alterations in cellular behavior) that a cell, or network of cells, undergoes.

The article points out that approximately 98% of the output of the human genome does not code for proteins, and is thus classified as "non-coding." Such DNA is responsible for the production of non-coding RNAs (or ncRNAs), which for many years were dismissively referred to as "junk RNA." More recently, however, they have been recognized as performing regulatory roles critical to the direction of brain function and behavior. It is also noted that the proportion of non-coding sequence within an organism's genome increases with its evolutionary advancement and organismal complexity–while the number of protein-coding genes remains about the same. This implies that higher-order cognition may be a product of this abundance of regulatory architecture in our genome, which in turn results in an abundance of circRNAs. The article goes on to explore evidence suggesting that dysregulation of circRNAs might contribute to the pathophysiologies of Alzheimer's disease, depression, schizophrenia, and autism spectrum disorder. The authors conclude:

Previously, the role of RNA as a regulatory architect of cellular behavior was constrained by its short longevity…. However, with the discovery of circRNAs, our understanding of how molecular networks function and communicate with each other, both intracellularly and intercellularly, may soon be revised….Given their relatively recent discovery, there is still a lot to uncover about the regulation and function of circRNAs and their involvement in cognition.

Could This Be the Stuff of Thought?

I will go out on a limb here, and propose that this "non-coding DNA" and "junk RNA" might constitute the "operating system" of the nascent human brain–establishing the data framework that the infant uses for its limited operations after birth, and upon which its intellectual growth is subsequently imposed. This would likely include the firmware for maintenance of homeostasis, our sensory "peripherals" (vision, hearing, smell, taste, touch), and our emergent motor activities (suckling, smiling, grasping, etc.). It could explain the universal vocabulary of expressions shared by humanity, and seen even in neonates–like smiling, cooing, grimacing, crying, and laughing. It could even vindicate Carl Jung's proposition that all humanity has a shared collective unconscious–a segment of the mind that is genetically inherited, and not shaped by human experience. The discovery of such a genetic substrate would have the potential to reconcile psychiatry's historic Great Divide. All this new evidence, of course, stops far short of proving my little pet hypothesis–but it damn sure moves our working model of memory and cognition firmly into the digital age.

Some of you might still be thinking that this is altogether too amazing a proposition to consider. But it doesn’t seem to me any more amazing than the fact that my smartphone does all the things it does simply by manipulating a whole lot of 1’s and 0’s. Or the fact that primordial plants somehow, over the course of millions of years of evolutionary trial and error, managed to formulate chlorophyll–endowing them with the capacity to convert carbon dioxide, water, and sunlight into sugar. We live in a world overflowing with scientific wonders, which is in turn but a small part of a universe that is likewise full of scientific wonders. 

To convince me that digital processing can’t be happening in our brains, you now have to convince me that life has been sitting on a digital storage system that has that capacity, but has never bothered to utilize it–which, as we all know, would be very out of character for life. Now that we know that our nucleic acids can perform these functions, the notion that our brain might utilize a process with which we are already very familiar in our day-to-day life seems rather mundane, or even predictable.

Besides its organic substrate, what most clearly distinguishes the human brain from our computers is our ownership of the executive function. We are in fact the “users” of our own brain-mind–but nowadays we live in fear that our artificial intelligence systems might hijack their own executive function, and thus become “human” enough to present an existential threat to humanity. I think we unconsciously recognize that computers are doing what we do, and naturally fear that they may in fact do it better. (Which might make us think twice about embracing DNA computation.)  Notably, our current biological model fails to physiologically account for our executive function–even though psychiatrists contend with it in nearly every case they treat! (Although they refer to it as “treatment compliance.”)

This model of thought explains our capacity for intellectual exploration, adaptation, invention, and creation–as well as the infinite variety of cultures, personalities, interests, talents, and tastes in the human spectrum. These capabilities arise from our individualism, our will, and our potential for self-awareness—all of which contribute to that x-factor of humanity, the pixie dust that makes our legendary “spark” so divine. This administrative function of the brain-mind is responsible for many of the management difficulties that profoundly complicate the practice of psychiatry—not only in the matter of treatment compliance, but in the nagging questions that arise in legal, ethical, and spiritual realms about the nature of psychiatric illness and treatment.

The Awesome Power of Information

So how can we modify the software of our brain-mind? External modification of data in a computer requires the input of new data by using a peripheral—such as a keyboard, disc drive, or modem. Our brain-mind can likewise use our organs of sensation to obtain information from the environment. Have you ever read a book, or had a conversation, that had a profound effect on how you perceive your life? Have you ever seen a movie that changed how you felt about its subject matter? Do certain songs trigger certain thoughts, feelings, or memories? All of these examples reveal the psychic power of external information.

But the power of information is most clearly demonstrated by the phenomenon of post-traumatic stress disorder–the product of an onslaught of negative information that floods the senses of someone experiencing a major traumatic event. Enough information, sadly, to change one's mind and life forever. Imagine this grim scenario: a 9-year-old girl witnesses her 4-year-old brother being shot in the head by their stepfather. No physical harm comes to her–but the sights, sounds, and smells of this event linger with her for the rest of her life. As is usual with PTSD, we can expect her to relive this event on a daily basis, with significant disruption of her emotional and physiological stability. Such is the potential impact of information on the function of the brain-mind.

Playing Dumb for Dollars

It's often said that if the only tool you have is a hammer, then everything looks like a nail. I can't think of a more penetrating illustration of this adage than contemporary psychiatric practice. Because of our ignorance of cognitive physiology, we hide behind the biological model of psychiatry—where the products are easily marketed, sold, and dispensed, requiring little investment of time or effort from either party. We avoid the obvious fact that psychiatric disorders occur in a mysteriously heterogeneous environment—manifesting themselves in both the more familiar brain, and that mysterious mind. And since the mind appears to be at least as complex as the brain, and even more personalized, effective treatment would certainly require a vastly different array of interventions than we have now.

One of the buzzwords in psychiatry today is treatment-resistant depression (or TRD, naturally)—which sounds like a positive development, since it suggests that psychiatrists have finally noticed that some of their patients aren't getting better on antidepressants. Several times a month I receive spam from CME (continuing medical education) websites touting articles like "Intervention Strategies for Unresponsive Depression" or "Addressing the Needs of the Treatment Resistant Patient." Childlike, I open them with the foolish hope that someone will be recommending deeper exploration of marital issues, or instructing psychiatrists how to screen patients for referral to psychotherapy—but sure enough, they're inevitably recommending augmentation of a certain antidepressant with a certain mood stabilizer, or with another diabetogenic antipsychotic agent. But of course it's exactly what I should expect: "Your current selection of hammers isn't working? Then try using two new hammers!"

Instead of being appropriately humbled by this yawning void in our understanding of the brain-mind, contemporary psychiatry has chosen to fall in love with the relatively modest gains in biological knowledge that we've seen over the past few decades. This infatuation has been fueled by numerous financial interests, as well as our own insecurities as a medical specialty. It's been hyped by high hopes, scientific indiscipline, and misinformation that greatly exaggerates our understanding of what we do. It seems to me likely that this failure to do what other specialties have done—to use the most analogous model as a point of reference, in order to better conceptualize our focal organ system—is no accident, but a calculated evasion of the obvious. Because if you thoughtfully apply the computer model to what we know about brain physiology, it raises serious questions about the benefits that can possibly be gained from medication intervention—and it offers few compensatory opportunities for corporate profit.

Finding a Place for Thought

I know perfectly well that this neurodigital hypothesis is highly speculative, and proof thereof is far beyond my capabilities. But the principles of science are not based on the assumption of knowledge—they're based on the assumption of doubt. And the reason I'm advancing this model is not to presume hard scientific knowledge, but rather to encourage hard scientific skepticism toward the woefully inadequate assumptions that drive our current treatment model. The computer is the best available model we have, if for no other reason than that it clarifies the relationship of the mind to the brain. In doing so, it calls attention to the egregious deficiencies of our biological approach, and provides a compelling explanation for the obvious limits of medication-based therapy.

When I've spoken to more biologically-oriented colleagues about this computer model, some have responded, "Well, it all ultimately comes down to biology, doesn't it?"—which in my opinion is misguided psychiatric reductionism. After all, a computer's software all ultimately comes down to electricity, doesn't it? Well actually, no—it doesn't. Because the substrate of software isn't electricity, but information—which can be encoded in any sort of medium, but happens to be commonly implemented in electronic devices. When it's moved from one device to another, it can be sent by wire, optical fiber, or radio waves—and stored on magnetic tape or disc, optical disc, even punch cards if you've got the time and space. DNA storage and computation have already demonstrated that such processes can likewise occur in a system of flesh and blood. But when that happens, does that make it "biological"? When we have taught a pet dog to beg for a treat, is that behavior biological? Is my writing of this article biological? Is your reading of it biological?

Our Precarious Status Quo

Imagine then that you are delivered a marvelously complex supercomputer that contains customized circuitry unlike any other computer—utilizing not binary bits of 0 or 1, but likely a quaternary code for storage and processing, along with a discrete system of chemical messengers for intercellular communication. All of this is contained in a protective case of multiple layers, with the understanding that if you open the case and physically intrude into the circuitry, you will not merely void the warranty, but cause irreparable damage to the machine. Now tell me how it works. Quite an engineering challenge, isn’t it?

So, let’s concede once and for all that the brain-mind is absolutely nothing like the pancreas. The biological model has offered psychiatry the opportunity to redefine itself as a legitimately medical specialty, and to secure a new post-Freudian market niche as the sole dispenser of cutting-edge treatment for mental illness. Using diabetes as a metaphor obviously promoted that role, piggy-backing on an established medical diagnosis to legitimize not only our medical treatment, but psychiatric illness as medical diagnosis. This computer model, on the other hand, raises a whole lot of messy questions about how our “software” might be a source of psychiatric symptoms, thus bolstering the potential benefits of psychotherapy as a treatment—which could readily be provided by other disciplines besides medically trained psychiatrists.

Psychiatry’s tendency to cling to our technology du jour is ingrained in our character–a product of an inscrutable organ system, our insecure standing within the brotherhood of medical specialties, and the industry that financially sustains us. It’s hard to project an image of mastery and reassurance to our patients and our peers, when the honest answer to most of their questions would be “I don’t know.” It would be bad for business to let everybody know our true level of ignorance. And yet Socrates, the founder of Western moral philosophy, famously stated: “The only true wisdom is to know that you know nothing.” What could be a more damning indictment of modern psychiatry than that? 

Throughout its history, psychiatry has repeatedly hitched itself to dubious science, making vain claims of mastering the unknowable. That same history demonstrates that it takes us quite a long time to notice that we're doing it yet again. But the impressive scientific progress described above is happening, and as you can see, it will likely raise significant questions about the scientific credibility of our current treatment model. Those doubts are bolstered by epidemiological studies that raise serious questions about the efficacy of modern treatment, and by iatrogenic concerns as well–most notably the CDC suicide study released in 2018, which noted a 30% increase in the incidence of suicide in the United States from 1999 to 2016–while more Americans were receiving psychiatric treatment than ever before.

The Future Is Now

Suppose you had a software problem with your state-of-the-art MacBook, you brought it in for repair, and the technician insisted that they couldn't correct the software problem–but offered a hardware intervention instead. Odds are this intervention wouldn't work very well, you wouldn't be a satisfied customer, and that technician wouldn't get many good reviews on Yelp. That's what I fear modern psychiatry is offering our patients–and I suspect that as our scientific knowledge expands, the mind will again become relevant. Because our software problems often arise from the faulty programming that we inherit, or devise on our own–programs that may seem to have some adaptive purpose during a particular phase of our life, but become deficient as our life circumstances evolve. In this larger sense, psychotherapy could consist of any sort of information or activity that can change our perspectives and/or our habitual responses–not just formal therapy, but learning, healing, feeling, changing. It could include exercise, spiritual practice, reading a book, the pursuit of wisdom, or conversing with a spouse. It could well lead to the development of therapeutic software interventions that we haven't yet imagined.

This exciting new research is leading us toward a wholesale reassessment of how the brain-mind works. Having already witnessed one revolution in psychiatry, I’m pretty sure another one is coming–after which many of our current assumptions and interventions will be regarded as less than state-of-the-art. (DBT will be a notable exception.) And as to what comes next? I think that topic is ripe for discussion….

References

1 Reber, P. (2010). "What Is the Memory Capacity of the Human Brain?", Scientific American Mind, doi:10.1038/scientificamericanmind0510-70

2 Video credit: Hye Yoon Park, Ph.D. (2014), https://www.youtube.com/watch?v=6MCf-6ItoZg

3 “DNA computing”, Wikipedia, https://en.wikipedia.org/wiki/DNA_computing

4 “DNA data storage”, Wikipedia, https://en.wikipedia.org/wiki/DNA_digital_data_storage

5 Bédécarrats, A., Chen, S., Pearce, K., Cai, D., and Glanzman, D. L. (2018). "RNA from Trained Aplysia Can Induce an Epigenetic Engram for Long-Term Sensitization in Untrained Aplysia", eNeuro, 14 May 2018, https://doi.org/10.1523/ENEURO.0038-18.2018

6 Wolpert, S. (2018). "UCLA Biologists 'Transfer' a Memory", UCLA Newsroom, https://newsroom.ucla.edu/releases/ucla-biologists-transfer-a-memory

7 Zajaczkowski, E. L., and Bredy, T. W. (2020). "Circular RNAs in the Brain: A Possible Role in Memory?", The Neuroscientist, Vol. 27, Issue 5, 2021, https://doi.org/10.1177/1073858420963028


© 2021 Paul Minot MD All Rights Reserved