A Poem: “I Went to a Psychiatrist”

I went to a psychiatrist

Because I was depressed.

I could not find my joie de vivre,

Nor could I get good rest.

A vision of calamity

Weighed heavy on my breast.

 

Relations with my wife had grown

Peculiarly tense.

All that I said perturbed her,

I felt always on defense.

In fury I destroyed a vase,

Which heightened our suspense.

 

My son and his wife had parted

And the grandchildren were pained.

Financial stresses added up

And thus my worries rained,

So I gave in to the darkness,

In my bedroom I remained.

 

“I’ll bet they have a pill for that,”

My daughter to me said,

And told me I should see a shrink

To fix my ailing head,

And with no other plan at hand

To treatment I was led.

 

When I came into his office

I was feeling rather bold.

I sat down and poured my heart out

So my tale was fully told,

But it wasn’t very long before

The doctor stopped me cold.

 

“That’s enough about your problems,

I don’t have such time to kill.

What’s the matter doesn’t matter,

All these symptoms say you’re ill,

So let’s get right down to business.

Time to choose your special pill.”

 

“I’m above inane emotions,

Always sober and aloof.

You’ve a chemical imbalance

Though I don’t have any proof,

I’m a doctor and I’m saying so

And that means it’s the truth.”

 

“I am skilled at polypharmacy

It’s what I’ve learned to do,

In case one pill won’t work,

I’ll add another drug or two.

Have a therapeutic cocktail

And a pharmaceutic stew.”

 

“This depression is no problem,

Your emotions I’ll unfetter.

With some Zoloft or Celexa

We’ll soon have you feeling better,

But if your love life’s still alive

This might well make it deader.”

 

“You’re most certainly bipolar

If such passions you must vent,

We’ve got lithium to treat you

If your kidneys are not spent,

And if they are, I’m sure some

Other drug will make a dent.”

 

“Like Tegretol or Topamax,

They both work pretty great,

Some Trileptal or Depakote

Might keep you more sedate,

And certainly Lamictal’s fine

If five weeks you can wait.”

 

“This pill might just be good for you

But might just make you fatter,

And if you shake there’s pills to take

To surely fix that matter,

But just in case, you ought to know

They could shut down your bladder.”

 

“They say Buspar helps anxiety

But I’m not sure that’s true,

The only meds that really work

Are really bad for you.

The DEA is on my back,

Hydroxyzine will do.”

 

I picked up the prescriptions,

But I never took a pill.

Instead I pondered my existence

While alone I climbed a hill,

And discovered deep acceptance

Once I’d had some time to chill.

 

My son’s divorce went well

And to this change I was resigned,

Then my wife confessed her fears

That I had cheating on my mind!

With tears we cleansed that matter,

Then our budget we aligned.

 

When I saw the shrink we never

Really touched on how I feel,

Or what I thought my problems were,

Or how I ought to deal,

He never did acknowledge that

The shit of life is real.

 

If you’re a shrink you might well think

That writing this is sleazy,

Such bitter tone and disrespect

Might make you feel queasy.

Shall I prescribe a pill for you?

It’s really easy-peasy….


Psychiatry’s Inconvenient Truth: We’re Not Saving Lives

In June 2018, the Centers for Disease Control and Prevention (CDC) released the results of a landmark study, in which all the suicides that occurred in the United States from 1999 to 2016 were recorded and examined. The results of this study should have been the biggest psychiatric news story since the advent of Prozac, but little notice was taken by either the national media or the public at large. This greatly suited the interests of psychiatric providers, and the industry that revolves around them—because the study vividly exposed the deficiencies of the medication-oriented model that dominates psychiatric treatment today. But to fully grasp the significance of this study requires some understanding of psychiatry’s struggles, and its recent history.

Part 1: The Age of Prozac

Today’s biological era of psychiatry blossomed in 1987, when Prozac—the first modern antidepressant—was introduced by Eli Lilly. In clinical practice, medications that cause a lot of unwanted side effects are commonly referred to as “dirty” drugs. Prior to Prozac, all antidepressant medications were unequivocally “dirty” drugs, with horrible side effects—not the least of which was lethal cardiotoxicity. In the few years that I practiced prior to Prozac’s release, I came to recognize what I bitterly dubbed “the tricyclic cycle”, alluding to the most popular class of antidepressants at the time. Patients would be hospitalized for depression with suicidal thoughts or behavior, where they were stabilized on tricyclic antidepressants. After discharge they would take the medication for a while, but then stop it because of its side effects—most commonly insufferable dry mouth, but it could just as well be dizziness, constipation, sedation…like I said, they were really dirty drugs. Weeks to months later, they would get depressed again—then overdose on the unused bottle of medication, and get admitted to the intensive care unit for cardiotoxic symptoms—where I would see the patient in consultation, and admit them to the psychiatric unit—where they would be again placed on a tricyclic antidepressant. Every time I prescribed these medications for my outpatients, I felt like I was handing them a loaded gun.

Prozac was a tremendous improvement over these medications, if only because it was refreshingly nonlethal. An overdose usually led to no more than a case of the jitters, and almost certainly not an ICU admission. Its sexual side effects were annoying, but they were the least of problems associated with earlier antidepressants. Prozac became hugely popular—not really because of superior efficacy, but because of its safety and tolerability. A psychiatrist friend of mine who was prescribing it before I did told me, “Paul, it’s the first antidepressant that I would take!”

Depression, of course, is as common as dirt—but because of their risks and side effects, earlier antidepressants were generally prescribed to only the most severe cases. Prozac, however, blew the lid off the target population for consideration of medication treatment. With limited risks vs. its potential benefits, a trial of Prozac was deemed appropriate for many patients with garden-variety depression of a sort that never would have been medicated before.  Given the fact that placebo response rates in clinical trials of antidepressants range from 35 to 40 percent, a very good percentage of patients reported subjective improvement on Prozac. People that were never regarded as clinically dysfunctional sometimes reported an improved level of function on the medication—a phenomenon described as “cosmetic pharmacology” by Dr. Peter Kramer in his book, “Listening to Prozac”—which, by the way, spent 4 months on the New York Times bestsellers list. Prozac became a cultural phenomenon, the wonder drug of the ‘90s—replacing Sigmund Freud as the face of psychiatry.

Since then numerous antidepressant medications in the mold of Prozac have been released, paving the way for the expanded use of other medications—such as mood stabilizers, stimulants, even new generation antipsychotics—in patients who would never have been medicated in the past. An entire generation or two has grown up identifying psychiatry as a medication-oriented specialty, rather than the analytic image it had in years past. Although other medications are more frequently prescribed nowadays, we are nonetheless still living in the Age of Prozac—the drug that made psychiatric medication cool.

In 2013, an estimated 40 million Americans—16.7% of the adult population—filled one or more prescriptions for psychiatric medications. 12% of adults were on antidepressants, 8.3% on anxiolytic or sedative medications, and 1.6 % on antipsychotic agents. 15 million Americans have now been taking antidepressant medications continuously for at least five years. This rate has almost doubled since 2010, and more than tripled since 2000. Nearly 25 million adults have been on antidepressants for at least two years, a 60 percent increase since 2010. With such a vast increase of people in psychiatric treatment, it would be logical to assume that we would see improved psychiatric health, wouldn’t it?

Part 2: The Inconvenient Truth

In June 2018, the most significant psychiatric news story since the advent of Prozac came to light, but was barely noticed at all…because, you know, Trump. That was the release of a landmark study by the Centers for Disease Control and Prevention—the federal agency charged with monitoring the health of our nation—examining all the suicides that occurred in the United States from 1999 to 2016. Their most significant finding was the fact that over this 17-year span, suicide rates in the United States rose by 30%–from 10.4 per 100,000 people in the year 2000, to 13.5 per 100,000 in 2016. The suicide rate increased by about 1% a year from 2000 to 2006, and then by about 2% a year from 2007 to 2016.

76.8% of all those suicides were by men, who have historically been more prone to suicide. Over this time period, the suicide rate among men increased by 21%–while the suicide rate among women increased by nearly 50%. There was a shocking 70% increase in suicide for girls age 10-19, especially those age 10-14. Almost twice as many children were hospitalized in 2015 for suicidal thoughts or behavior as in 2008. Suicide has become the second leading cause of death among those age 10 to 34, and the fourth leading cause of death for those age 35 to 54.

Of course, there are many psychosocial factors that may have contributed to this alarming rise in suicide. The declining economy, diminishing social safety net, and rising income inequality are certainly contributing factors. Changing attitudes and social mores, like the diminishing influence of religion, may have made suicide a more socially acceptable option than it used to be. The rise of social media may also be a factor, particularly among young females.

This increase in suicide rates was much more dramatic in rural areas—which confirms the likelihood that psychosocial factors are contributing significantly to this epidemic. The population in rural areas has become older, since many young people move away to live in urban areas. The economic downturn of the Great Recession has hit these areas harder, with many personal bankruptcies and closures of rural businesses. These factors combine to create a less vibrant economic and social culture in these areas, with loss of social cohesion and increased isolation. In short, rural areas have become an increasingly depressive environment. Yet access to mental health care is extremely limited in rural areas–and the lack of anonymity there discourages locals from pursuing treatment. There has also been an alarming increase in substance abuse in rural areas, especially opiates. The increased availability of guns in rural areas likewise increases the potential lethality of suicidal behavior.

Suicide rates nonetheless increased in all states, including the urban ones, with the singular exception of Nevada. It actually decreased there by 1%, which may in part be due to the fact that it’s a rural state that’s been experiencing a lot of urban growth.

54% of suicides occurred in people with no history of identified mental illness. Those without a known illness were more likely to be male, and to use a firearm. Dr. Joshua Gordon, the Director of the National Institute of Mental Health, maintains that, “When you do a psychological autopsy, and go and look carefully at medical records, and talk to family members of the victims, 90% will have evidence of a mental health condition.” It’s frankly hard for me to assess the credibility of this assertion—since “mental health condition” is a pretty vague term, and I’m not sure what “evidence” is more confirmatory than a completed suicide.

The Lead Researcher for the CDC Study, Dr. Deborah Stone, suggests that suicide transcends psychopathology–contending that it was “not just a mental health concern. There are many different circumstances and factors that contribute to suicide. This study points to the need for a comprehensive approach to prevention.”

Part 3: Psychiatry Strikes Back

In contrast to the passionate concern of public health professionals, the response of American psychiatric leadership to this report was a tiny collective shrug—obviously intended to attract as little attention as possible to this alarming study. Dr. Saul Levin, CEO and Medical Director of the American Psychiatric Association, proclaimed that “the data reinforce the need to fund and enforce laws ensuring access to mental health services,”—whatever the hell that means. The current President of the APA, Dr. Altha Stewart, issued a bland public service announcement: “People should know that suicide is preventable. Anyone contemplating suicide should know that help is available, and that there is no shame in seeking healthcare.” It is unclear how many people contemplating suicide actually heard this message, since it was posted on the APA’s website. Neither of these officials gave the least indication that this might be a failure on the part of psychiatry.

In an interview for Psychiatric News, “suicide expert” Dr. Maria Oquendo, a past President of the APA, rightly called for measures to secure handguns to reduce their availability for those at risk. She also called for providers to be “vigilant” in assessing suicide risk, and “proactive” in preventing recurrent psychiatric episodes in known patients.

In all my 39 years of training and practicing in psychiatry, in numerous work settings, I’ve never encountered any psychiatric staff who were NOT both vigilant and proactive in addressing suicide risk. That’s because in the practice of psychiatry, suicide is the archenemy. Medicolegally speaking, we are expected to keep our psychiatric patients from killing themselves on our watch. The entire treatment apparatus is designed to identify risk for suicide, and to prevent its occurrence.

Naturally, it’s an imperfect line of defense–because it ultimately depends on the honesty, intent, and resources of the patient. But every failure to avert suicide is worthy of clinical review, to determine whether or not it could have been prevented. In psychiatry, it’s our best opportunity to save lives like other doctors do. And like any other specialty, we should always be improving our efforts in doing so—which means taking a good hard look at our failures, acknowledging them, and changing our practices if necessary. Dr. Oquendo seems to point the finger at unnamed individuals for not being careful enough—rather than acknowledging the likelihood that a disaster of this national scale, with so much of the public already under our care, could be a failure of our profession at large.

The article goes on to note that “suicide expert” Dr. Oquendo is engaged in research using PET scans and MRIs to “map brain abnormalities in mood disorders and suicidal behavior”, to “examine the underlying biology of suicidal behavior.” Upon reading this, I felt my head explode.

We are in the midst of an epidemic of suicidal behavior that exhibits prominent socioeconomic, demographic, and geographic trends. The existence of these obvious influences—hell, the existence of an epidemic itself—contradicts the notion that there is a significant anatomical component to suicidal behavior. My own clinical experience tells me that there are many unique paths that patients take to arrive at suicidality—too many to be accounted for with such a simplistic model. I will also go way out on a limb here, and propose that a 30-minute interview by a well-trained clinician would be infinitely more effective in screening patients for suicide potential, more available to those in affected communities, and much less costly than screening patients with MRIs and/or PET scans.

Dr. Oquendo’s research pursuits seem to me emblematic of biological psychiatry’s clueless departure from clinical realities. I see it as fiddling with neuroimaging while Rome burns. People don’t typically become suicidal because something happens to their brain—it’s usually because something has happened to their life—fear, despair, anger, loss, or trauma. And psychiatry’s reigning biological model of treatment habitually glosses over such issues.

Part 4: The Journey of Dr. Thomas Insel

The only major psychiatric figure I found who even hinted at the possibility that this was a failure of our profession was the one responsible for the second most significant psychiatric news story since the advent of Prozac—a story which likewise was never brought to the attention of the general public.

In 2002, Dr. Thomas Insel took the position of Director of the National Institute of Mental Health, better known as the NIMH. He had already established his reputation within the circles of neuroscience research by demonstrating the efficacy of clomipramine in treating obsessive-compulsive disorder—one of biological psychiatry’s more convincing successes, in my opinion—and by animal studies revealing oxytocin’s role in emotional bonding. As NIMH Director, Dr. Insel became best known for his oversight of the infamous STAR*D study of the previous decade. “STAR*D” stood for Sequenced Treatment Alternatives to Relieve Depression, and was Dr. Insel’s attempt to establish “precision medicine for psychiatry”—that is, an evidence-based model to evaluate the relative efficacy of various antidepressants, and to establish a common treatment protocol. This study followed over 4,000 patients in 41 clinical sites over the course of 7 years, at a cost of $35 million. It was unique not only in its scale, but also because it engaged “real world” patients who weren’t screened out for substance abuse issues, medical illness, or other contaminating influences. It was also used to collect genetic data, in the hope of identifying biomarkers to predict antidepressant response and tolerance. Its findings were frustratingly inconclusive, and yet entirely consistent with the experience of most psychiatric providers in the field—many patients dropped out of treatment, many complied inconsistently, and no single medication or combination of medications was found to be measurably better than any other regimen. The most convincing finding of the study was the revelation that any patient who fails to improve on one antidepressant is very likely to fail on another antidepressant as well—which was already common knowledge to clinicians.

Since then, Dr. Insel has been vilified by much of the psychiatric community for this undertaking. Instead of establishing a clinically verified protocol for antidepressant therapy, it demonstrated on a large scale just how clinically suspect our treatment model really is. A whole bunch of research money was spent, only to prove that nothing we’re doing does much good.  It’s widely regarded as biological psychiatry’s biggest blown opportunity to demonstrate its effectiveness in treating depression. But fortunately for the corporate overlords of my profession, it went unnoticed by the general public.

Since leaving NIMH in 2015, Dr. Insel has become involved in the development of a cellphone app to assess psychiatric risk—by using data from a patient’s electronic medical record, combined with monitoring of their personal electronic activities. This certainly seems far removed from the biologically-oriented ambitions of his past. In a 2017 interview for Wired magazine, he reflects:

“I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs….I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness. I hold myself accountable for that.”

He’s certainly assuming a lot of personal responsibility for the failure of what was in fact an earnest effort to improve treatment—especially when you consider the entire industry he’s implicating in his description. But even more intriguing is his response in the New York Times, when asked for his opinion on the CDC Suicide Study:

“This is the question that I’ve been wrestling with: Are we somehow causing increased morbidity and mortality with our interventions? I don’t think so. I think the increase in demand for the services is so huge that the expansion of treatment thus far is simply insufficient to make a dent in what is a huge social change. In contrast to homicide and traffic safety and other public health issues, there’s no one accountable, no one whose job it is to prevent these deaths—no one who gets fired if these numbers go from 45,000 to 50,000. It’s shameful. We would never tolerate that in other areas of public health and medicine.”

Since leaving the NIMH, Dr. Insel has steadfastly avoided responding to his critics. But his enigmatic comments here suggest that he might be gently trolling our profession in the wake of these results—dropping hints that psychiatry might be to blame, and then walking it back to cover his tracks.

Part 5: Bringing Psyche Back to Psychiatry

What is the highest concern of any medical specialty? Well, all medical doctors swear to the Hippocratic Oath, which maintains that above all, we should do no harm. More people have been treated than ever before—and yet more people are dying. This ugly truth suggests that we may be unwittingly doing harm—but our profession appears to have no desire to explore that possibility. It’s an established fact that antidepressant medications can paradoxically increase the risk of suicide in some cases. I also worry about the potential negative impact of labeling a patient with a psychiatric diagnosis, or glibly attributing depression to a mythical “chemical imbalance”—particularly in impressionable young patients.

Even if we’re not causing harm, it should be incumbent upon psychiatry to do what all other medical specialties make every effort to do—to make damn sure nobody dies from our diseases. And if a proliferation of medications is not doing that job, then we should be looking hard for other techniques to do so.

As I see it, this CDC study is calling our profession to task. We work in a specialty that treats the most complicated organ system in the human body, the brain-mind, and it’s our job as physicians to reduce morbidity and mortality. We’ve chosen to neglect the complexity of that task—settling for a simplistic treatment model that sells pharmaceutical products, promises a quick fix for complicated problems, and makes us psychiatrists feel more like “real doctors”—but is saving fewer lives. This is a public health emergency in our own territory, and preventing suicide is most certainly our business—and if we’re not going to assume the responsibility for rigorously combating suicide, then we have no claim to leadership in the field of mental health.

Dr. Daniel Carlat of Tufts University has been advocating reform of contemporary psychiatry for over a decade—and is best-known for publishing the Carlat Psychiatry Report, a medication newsletter combating commercial bias in drug research. He’s courted controversy within the psychiatric community, by testifying in favor of licensing psychologists (with appropriate training) to prescribe psychiatric medications. This is strongly opposed by psychiatrists, despite the fact that physician assistants and nurse practitioners have been prescribing psychiatric medications for many years with only paramedical training, rather than a full medical degree, and without any significant opposition. Dr. Carlat feels patients would best be served by a single provider who is versed in both medication and psychological treatments—a sort of “one-stop shopping” model of care, more convenient and available than it is now. Psychiatry’s vociferous opposition to psychologists having this privilege seems transparently motivated by a justifiable fear—that because of their greater expertise in psychotherapy, psychologists might just do our job better than we do.

I support Dr. Carlat in this cause, and the evolution of psychiatry into a more inclusive and available model of care—a sort of primary psychiatric care provider, offering screening and treatment that is less expensive, more comprehensive, and more available than it is today. Not because it would be good for my profession, but because it would be better for people in need. In response to the CDC study, Dr. Christine Moutier, Medical Director of the American Foundation for Suicide Prevention, notes that, “We need to be teaching people how to manage breakups, job stresses. What are we doing as a nation to help people to manage these things? Because anybody can experience those stresses.” Yes, even people with perfectly normal brains.

In 1808 a German physician named Johann Christian Reil coined the term “psychiatry”, the Greek roots of which literally translate to “medical treatment of the soul.” It’s an ironically romantic term for what our profession has become today. It’s also a paradox of sorts—the application of secular technologies to heal something that most of us see as ethereal, or even sacred.

This paradox has bedeviled psychiatry throughout its history—two competing schools of thought battling to define our discipline, and our role as healers. One is biologically oriented, focused on the anatomy and physiology of the brain, and wedded to more conventionally medical interventions. The other has a cognitive orientation, focused on understanding and treating that abstract entity known to us as the mind. Over the past 40 years psychiatry has been increasingly dominated by the biological school—initially triggered by significant technological breakthroughs in our understanding of brain physiology, but subsequently hijacked by a corporatist alliance of the insurance industry, hospital industry, and Big Pharma—and then legitimized by the purchased collusion of academic psychiatry. All this has led us to our current resting place—complete neglect of the “soul” that gave psychiatry its name.

These pendular shifts in orientation haven’t occurred because either school of thought has been proven to be more clinically valid. They’ve happened when one school becomes more marketable than the other.  They are a natural consequence of the essential duality of the brain-mind, and our extremely limited understanding of its physiology—creating an academic environment that lends itself to a “gold rush” mentality. The public greatly overestimates the amount of hard knowledge we have, because we’re always overselling ourselves and our chosen tools. Biological psychiatry is now a brand, just like Freud used to be a brand—but now we have a wider array of products to sell.

Neither school of thought has been that successful at conquering mental illness. If we’re ever going to successfully treat psychiatric disorders, we’re going to have to acknowledge our need for a complete understanding of its organ system—which for each side means a better understanding of what those other guys know. I think this ingrained competition for academic and market dominance has blinded both sides to an obvious truth: Psychiatric disease by its very nature is eclectic disease—and its most effective treatment invariably calls for a truly eclectic treatment model.

Psychiatric disorders are in fact a greater mystery than we can truly grasp. This study makes painfully clear how impotent our current biological model is in addressing the problem of suicide, and it implores us to consider a more sensitive and psychological approach in dealing with this critical issue. 

Everyone who’s worked in the treatment of alcoholics is familiar with the term “rock bottom”—that point in an alcoholic’s life when they’ve done enough destruction to their own life, and inflicted enough pain on themselves and others that their denial is finally overcome, and they finally recognize that their only path forward is to give up demon rum. In the wake of this CDC study, I think this would be a good time for psychiatry to take a hard look at what it’s been doing for the past few decades—what good we’ve done for our patients, and what good we haven’t done–and call it rock bottom. The alternative is blindly charging onward in this same direction, and seeing just how high the suicide rate can go. We’ve been drunk on the biological model for too long. It’s not working out well at all—and it’s high time we took the cure.


Psychiatry’s Mission Impossible

Most of my work is aimed at discrediting the current biological model of psychiatric treatment—which is supported by bad science, and corrupted by financial interests. But the sad truth is that psychiatry has always been inherently prone to such corruption—simply because its scientific challenges are so very daunting, and the public desire for successful intervention in psychiatric disorders is so desperate. So, before we begin to ponder psychiatry’s many sins against science, let’s first give full consideration to the peculiar challenges it faces as a medical specialty.

Psychiatry’s main anatomical focus is the brain—an organ entirely encased in bone.  Underneath the bone are layers of fibrous tissue and fluid that cushion the brain, all of which are vulnerable to infection if intruded upon.  The brain itself is a fabulously complex array of about a hundred billion nerve cells (neurons), each with numerous junctions connecting it to its neighboring cells.  Cells communicate between each other across these nerve junctions (synapses) through the secretion of chemical messengers known as neurotransmitters. Each neuron has an average of about 7,000 synapses. There are over 100 different neurotransmitter agents identified in the human brain–each of which may have either an excitatory or inhibitory effect on the postsynaptic cell, depending on what kind of receptor protein it contacts in the cell membrane.  The location of this intercellular communication is in the synaptic cleft, the microscopic space within the junction which is crossed by the neurotransmitters. Here the balance of neurotransmitters is constantly adjusted by the two cells through the processes of release, metabolism, and reuptake—which in turn are regulated by an elaborate feedback network incorporating input from other neurons as well. 

In short, the raw circuitry of the brain is microscopic, profuse, and unimaginably complex. And every brain is unique!  
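
To get a feel for that scale, here is the back-of-the-envelope arithmetic implied by those round numbers (the inputs are rough published estimates, not precise counts):

    neurons = 100e9              # roughly 100 billion neurons (a rough estimate)
    synapses_per_neuron = 7_000  # roughly 7,000 synapses per neuron, on average
    total_synapses = neurons * synapses_per_neuron
    print(f"~{total_synapses:.0e} synapses")  # ~7e+14: several hundred trillion junctions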

The physiological tasks of brain cells are largely determined by their location within the brain—and the higher functions associated with thoughts and feelings are particularly inscrutable, since they occur within a microscopic assemblage of neurons acting in a meticulously coordinated fashion. Hence studies of brain cells in vitro (i.e. outside of the body in a laboratory medium) tell us little about their psychiatric function. This leaves us with the necessity of studying brain cells in vivo (in the living organism) to gain an accurate understanding of their function.  But doing so would require passing a needle past the skull and through the surrounding nerve tissue, causing irreparable damage to the brain since neurons have little if any capacity for regeneration.  This makes direct observation of living brain tissue ethically unacceptable—and even if it wasn’t, how many people would give informed consent to participate in such a study? 

The other medical specialties (besides neurology, of course) focus on organ systems that are infinitely less complicated than the brain, more physically accessible, and able to withstand a needle biopsy without irreparable loss of function.  Chemical markers associated with these systems are typically measurable in the peripheral blood, unlike those of the brain. Other intrusive diagnostic procedures such as endoscopy are available as well.  Access to this sort of information allows physicians to be reasonably certain what’s going on inside the patient—a feeling dreadfully unfamiliar to any prudent psychiatrist.

Before one even contemplates these anatomical and physiological complexities, there is the conundrum of the brain-mind’s duality—the brain in the corporeal world, the mind in the ethereal. Like astrophysics, neuroscience is an area of study that raises philosophical and spiritual questions, provoking the sort of controversies that are attendant to such concerns. In the realm of medical science, the brain-mind stands out as a uniquely remote wonder, a bottomless enigma that we’ve barely begun to crack. In point of fact, the secrets of the brain-mind constitute a last frontier far more scientifically daunting than astrophysics—which, after all, is just the study of a bunch of dumb particles that happen to be very far away. It’s only fair to acknowledge the onerous scientific challenges that psychiatric researchers confront in trying to penetrate this ironic, existential mystery.

Psychiatry’s sense of futility is further aggravated by the likelihood that once a psychiatric disorder does become treatable, it will be reclassified as a non-psychiatric disease.  In the 19th Century a large proportion of asylum inmates were diagnosed with general paralysis of the insane (GPI)—a psychiatric disease characterized by manic symptoms or other behavioral problems, followed thereafter by the onset of dementia and progressive paralysis.  It was noted that this presentation was more frequent in men, especially those with “debauched” lifestyles.  Eventually this problem was identified as neurosyphilis, an advanced stage of syphilis that takes ten years or more to manifest itself in infected individuals.  Once antibiotic treatments were developed it became a “medical” illness, and thus no longer the concern of psychiatrists.  Similar paths were followed by epilepsy, the thiamine deficiency and hepatic encephalopathy associated with alcoholism, Parkinson’s disease, Huntington’s chorea, and other neurodegenerative diseases.  It would seem that we are in part defined as a specialty by our ineffectuality, since any disease that can be readily treated becomes someone else’s responsibility.  This sequence pretty much dooms us to persistent clinical failure–a reality of psychiatric practice that at times can be quite demoralizing.  

With so much to prove to our patients and peers, and a dearth of reliable scientific information, psychiatry has time and again compensated for the deficiency of its knowledge base by simply making shit up.  The unfathomable nature of our calling conveniently lends itself to grand fabrications—and when patients bring us uneasy questions about what we’re doing and how it works, almost any answer seems more satisfactory than yet another “I don’t know.” Consequently, psychiatry has been prone to spasms of radical reinvention over its history, as one brand of pseudoscience is replaced by another in a desperate attempt to cover up our gaping ignorance. These “breakthroughs” have generally swung between two opposing modes of characterizing psychopathology—either a biological orientation focusing on the brain, or a psychological orientation preoccupied with the mind.

At this time in psychiatric history, our understanding of the brain certainly exceeds our physiological understanding of the mind—which is nil. I’m completely forgiving of our ignorance in this regard. What I can’t forgive, however, is our refusal to acknowledge that ignorance in our clinical practice–and our vain, corrupt promotion of biological half-measures, as if they were clinically and ethically sufficient.

My contention is that in order to improve our success in psychiatric treatment, and to minimize the unintended harm we are inflicting, we need at last to develop an eclectic array of interventions that address the eclectic nature of psychiatric disorders. Treatments that incorporate not only what we know about the brain, but what we know about the mind. We may not understand the underlying physiology of how the mind works, but we do have plenty of knowledge about how human beings work—and it’s foolish and inhumane not to make that sort of knowledge a necessary part of psychiatric intervention.

For thousands of years, people have been overcoming anxiety and depression by pursuing emotional growth. In my opinion, no psychiatric patient should be deprived of assistance in exploiting that innate capacity for meaningful, lasting improvement in the course of their treatment. I see nothing in our current model of care that acknowledges that capacity, much less utilizes it. I think it should be our duty to do so, if patient health is in fact our goal.

But there’s one major sticking point in my proposition—yet another quandary, one that impedes the aggressive funding of research to develop new models of psychotherapy for common psychiatric disorders:                  

How the hell are corporations going to monetize it???


Artificial Afterglow: How SSRIs Might Actually Work

Many of the most popular antidepressants—like Prozac, Zoloft, and Celexa—are classified as selective serotonin reuptake inhibitors, or SSRIs. In the nerve junctions of the brain where the neurotransmitter serotonin is released, it’s typically reabsorbed by nerve cells as a sort of feedback mechanism to regulate the amount present in the synaptic cleft—the space between two neurons. SSRIs act by decreasing this reabsorption of serotonin, which results in a net increase in the amount of serotonin in the cleft.
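
To make that mechanism concrete, here is a deliberately crude toy model of my own, not anything drawn from the pharmacology literature. It treats cleft serotonin as a single pool with a constant release rate and first-order clearance by reuptake and metabolism; the rate constants are arbitrary numbers chosen only to show the direction of the effect:

    def steady_state(release, reuptake, metabolism):
        """Steady state of ds/dt = release - (reuptake + metabolism) * s."""
        return release / (reuptake + metabolism)

    release = 1.0      # serotonin released into the cleft per unit time (arbitrary units)
    reuptake = 0.8     # baseline reuptake rate constant (made-up value)
    metabolism = 0.2   # enzymatic breakdown rate constant (made-up value)

    baseline = steady_state(release, reuptake, metabolism)
    with_ssri = steady_state(release, reuptake * 0.3, metabolism)  # ~70% of reuptake blocked

    print(f"baseline cleft serotonin: {baseline:.2f}")   # 1.00
    print(f"with reuptake inhibited:  {with_ssri:.2f}")  # ~2.27

Block most of the reuptake and the steady-state level more than doubles, even though nothing about release has changed; in this cartoon, that is all the net increase in the cleft amounts to.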

SSRIs aren’t just used to treat depression. They’re also prescribed for chronic anxiety, and to treat obsessive-compulsive disorder. However, they have significant side effects, the most frequently annoying of which are their sexual side effects. SSRIs typically decrease sexual desire, inhibit sexual arousal—and most frustrating of all, can stop you from being able to attain orgasm. All SSRIs seem to be equally problematic—these sexual effects seem to be tightly bound to their therapeutic benefits.

Over 20 years ago it occurred to me that this might not just be a coincidence, but rather a clue as to the therapeutic mechanism of these antidepressants. I’ve always been intrigued by the mechanism by which birth control pills work. For those of you who don’t know, the reason we don’t have a birth control pill for men is not because of sexism in the health industry—but rather because women happen to have a natural state of infertility, and men don’t. This state of infertility is pregnancy—when ovulation is suppressed by the body in order to preserve the endometrium of the uterus to nourish the fertilized egg that’s implanted there. Birth control pills contain a synthetic form of the hormone that promotes gestation, i.e. progesterone—thus imitating the state of pregnancy.

I pondered the array of therapeutic benefits that SSRIs offered—improved mood, decreased anxiety, diminished obsession—and their peculiar sexual side effects, particularly inhibition of orgasm—and it seemed to me that this confluence of effects was more than coincidental. Then it dawned on me that there is in fact a natural state in which we humans exhibit all these phenomena—the refractory stage of sexual response, aka “afterglow.” You know what I mean (or I certainly hope you do)—that time after the act when we probably feel more relaxed, happy, and at peace with the world than any other time in our life. And I wondered if indeed serotonin might just mediate that phase of sexual response. But in all the time since, I haven’t heard that connection suggested in any psychiatric literature.

As you might expect, SSRIs have been found to exert their influence where serotonin-producing neurons are most concentrated in the brain—the raphe nuclei of the brainstem, located just north of the spinal cord and nowhere near the higher cognitive centers of the brain. From there, serotonergic neurons project to the limbic system, commonly referred to as “the lizard brain”—the seat of our primitive emotions, directing us to choose among any one of the “five F’s” of emotional response: “fight, flee, freeze, feed, or f**k.” And sure enough—along with the hormone prolactin, serotonin has indeed been implicated in mediating the refractory phase of sexual response.

The oldest citation I could find postulating this role of serotonin in sexual response was an archival article that’s currently posted on the website of the National Institutes of Health. It’s an article that was first published in the journal Behavioural Brain Research in 1984—3 years before the release of Prozac, the first SSRI, in 1987. It’s intriguing to me that Prozac was vigorously promoted, by Big Pharma and clinicians, using the unsubstantiated assumption that it was correcting some mythical “chemical imbalance” that was causing depression. For many years we psychiatrists held out hope for the development of an SSRI that didn’t cause these sexual side effects—when the evidence against this possibility was right before our eyes! Nowadays there’s a consensus among sexual response researchers that serotonin functions as a sort of hormonal “brake” applied to the process of orgasm, and that it may trigger release of the hormone prolactin. There is precious little evidence that serotonin has any other role in the control of mood—and numerous studies aimed at identifying any serotonin deficiency in the brains of chronically depressed individuals have failed to do so.

It’s worth noting that only 5% of the body’s serotonin is in the brain—the remaining 95% is in the gastrointestinal tract, where it is involved in the neurological regulation of digestion. This not only accounts for the GI side effects of SSRIs, but is also postulated to explain how we tend to feel our emotions in our guts, and how emotions affect GI function. Anyone who’s ever had a pet cat and moved from one residence to another can testify that this is not a peculiarly human phenomenon. This connection confirms the primitive origins of serotonergic responses, disengaged from the higher cognitive functions that almost certainly have a role in depression.

All this leads me to the conclusion that rather than correcting any underlying chemical imbalance in the brain, SSRIs probably act by creating a chemical imbalance that masks psychiatric symptoms, by triggering the instinctual psychological effects associated with postcoital afterglow—not unlike the way birth control pills trick the female reproductive system into thinking it’s pregnant. Unfortunately, these effects are not experienced with the intensity associated with actual post-orgasmic bliss—perhaps because of the endorphins that are released in intercourse during the plateau period prior to orgasm, and/or the prolactin that is released along with serotonin during the refractory period.

I can’t help but suspect that a whole lot of smart people have been playing dumb about the true nature of serotonin’s role in the brain for many, many years. The evidence of serotonin’s role in sexual response was in plain sight prior to their marketing campaign—so it seems to me that Big Pharma and academic psychiatry chose to pretend that they were fixing something that was broken, rather than acknowledging the likelihood that the efficacy of SSRIs was based on a cheap neurophysiological gimmick. Pursuit of a “healthy balance” is so much more marketable than manufacturing some instinctual delusion of wellbeing. I still prescribe these medications, but I don’t bullshit my patients into thinking that there’s any chemical imbalance to fix.

I am, however, unequivocally concerned about the overuse of antidepressants today. I’m thoroughly convinced that they have undesirable effects that need to be studied further—especially in long term use, and when prescribed to young people. But my overarching concern about antidepressants is the manner in which they are oversold and overutilized as an alternative to actual exploration of feelings and psychosocial stressors—in a manner that ignores the holistic value of sadness, and is dehumanizing to both my profession and its patients. I believe that the thoughtless manner in which antidepressants are used today is a cheap, lazy, and ultimately ineffective way to address the complexities of human existence—the cumulative result of modern psychiatry’s economic and scientific corruption, and a spiritually empty worldview.


How to Think Like a Scientist (and Why Psychiatrists Don't)

Science is the study of nature—and perhaps the most challenging wonder of nature is our own brain-mind. It’s unimaginably complex, entirely surrounded by bone, terribly vulnerable to intrusion—and no two of them are alike! All these factors combine to make psychiatry’s task of understanding its subject far more difficult than that of other medical specialties. For centuries, in the face of our enormous gaps in knowledge, psychiatry has tended to grasp at slender reeds of evidence, and then play fast and loose with science—ambitiously concocting half-baked theories, in a vain effort to assert mastery of a scientifically impenetrable mystery.

In order to grasp just how far psychiatry’s brand of science has strayed from established scientific fact, you first need to understand the scientific method—which is how all scientific knowledge is obtained and verified.  Physicist Jose Wudka describes the scientific method as “the best way yet discovered for winnowing the truth from lies and delusion”—in other words, a sort of intellectual filter specifically designed to eliminate all bullshit.

The birth of the scientific method is credited to the great Arab physicist and mathematician Ibn al-Haytham, who in the early 11th century performed rigorous experimentation while studying optics.  For a thousand years it has prevailed as the prescribed manner in which any working assumption is examined and validated.  The steps of the scientific method are as follows: 

  1. Observe and describe a phenomenon.
  2. Formulate a hypothesis to explain the phenomenon.
  3. Use the hypothesis to predict outcomes.
  4. Test the hypothesis through experimentation and/or further observation, and modify the hypothesis in light of the results.
  5. Repeat Steps 3 and 4 until there are no discrepancies between your hypothesis and the results.

When a hypothesis has been run through this mill over and over, demonstrating its validity to the point that it’s accepted as proven by a consensus of the scientific community, it is then called a theory—a conceptual framework that’s used to explain existing observations, and to predict new ones.  Such theories, like the theory of evolution and the theory of relativity, function as jumping off points for the creation of more hypotheses, further observation and experimentation, and the continued expansion of the body of scientific knowledge.   

Please note that this use of the word “theory” is very different from the way that we use it in everyday language, where it conveys significant doubt and speculation.  This common use of “theory” actually describes what we would call a “hypothesis” in scientific terms—an unverified idea based on speculation. This ambiguity has contributed significantly to the public’s confusion about science today. While in science no theory is unquestionable—because almost nothing in science is actually unquestionable—a theory is defined as a proposition that’s already been verified as true after extensive scientific testing, and is now used as a foundation for further study. Critics of science have exploited this ambiguity to cultivate disbelief.  After all, the theory of evolution is “just a theory”—and if your personal definition of “theory” is a dubious supposition rather than a generally accepted fact, then a scientist’s unqualified endorsement might seem imprudent. But it isn’t. Theories are NOT imprudent.

The single most ignorant and misleading claim spouted by science’s opponents is that science is a faith in itself—when nothing could be further from the truth. Properly done, science is the antithesis of faith—because its guiding purpose is to question perceived truth, rather than accept it. People of strongly held religious faith despise science for its rejection of faith –which is, by definition, belief in the absence of evidence. But there’s really no choice for scientists in this matter–because scientifically speaking, belief without evidence is nonsense.

When religious opponents of evolution promote the concept of intelligent design—a feelgood hypothesis that maintains that there must be an engaged Creator because it sure seems like there is one—they start with an unobservable phenomenon, God, that is accepted without evidence—then scorn any effort to dismiss its existence. Their sole intention is to reaffirm faith, rather than execute the skeptical work of science. In contrast, the theory of evolution is validated every time a drug-resistant strain of bacteria emerges, without us even looking for any more proof.

But bogeymen of the culture wars are not the only enemies of science. Like any other human enterprise, science is corruptible, especially when there’s big money at stake. And in a time when psychiatry most lays claim to being based on science, it has done so by shrouding itself in pseudoscientific myths, to create the illusion of precision where there is none.

Ibn al-Haytham foresaw the corruptibility of the scientific process. As he put it, “Truth is sought for its own sake. And those who are engaged upon the quest for anything for its own sake are not interested in other things.” The sad truth is that most of psychiatry’s scientific knowledge today has been in a state of developmental arrest, stuck on a warped rendition of Step 3, which could be restated as: “Use the hypothesis to market psychiatry and its products”.  Like the advocates of intelligent design, most of psychiatry’s research institutions have been bent on producing data that supports a myth—specifically, the one that psychiatric disorders are caused by chemical imbalances, which are in turn resolved with psychiatric medications. Instead of scientifically scrutinizing this hypothesis, marginal findings are accepted as confirmatory and inflated in significance, so they can be used to generate pharmaceutical sales pitches.

One of the main reasons that psychiatry has abandoned this essential skepticism is because we have so little knowledge about how the brain-mind actually functions—and yet we need the illusion of knowledge in order to promote our products and services. The void in our scientific knowledge of the brain-mind is astounding. If you ever asked a cardiologist, “Physiologically speaking, what is a heartbeat?”, they could probably bore you to tears with details in explaining how it all works. But if you ask a psychiatrist this entirely pertinent question—“Physiologically speaking, what is a thought?”—the only honest answer would be, “We have no freaking clue”. Because we don’t. THAT is the most relevant measure of psychiatry’s scientific knowledge I can think of. We don’t know how the brain-mind executes any of the higher functions that are the actual focus of psychiatry, the generation of thought and behavior—and so the bulk of our psychiatric “science” to date is mucking around finding medications that cause desired effects in a brain-mind, when we really have no idea how it all works!

With their embrace of technology—defined as “the application of science for practical purposes”—psychiatric researchers display the trappings of science, which is enough to impress much of the public with their efforts. But in fact, the modern myths of psychiatry are more thoroughly sustained by faith than by hard scientific proof. Most of us use smart phones—a very advanced technology—regularly in our day-to-day lives, but I suspect very little of that time is spent doing hard science. Likewise, countless millions in research dollars have been spent amassing evidence to prop up psychiatry’s biological model and promote pharmaceutical products, rather than rigorously examining the scientific validity of its assumptions. This is a brazen neglect of the guiding precepts that have been at the foundation of scientific study for a millennium. In short, they’re baffling us with bullshit, more intent on generating pharmaceutical ad copy than establishing scientific fact.

You will find little in my work that attempts to assert any hard scientific truths—because the determination of scientific truth occurs through the exhaustive efforts of a community, rather than the musings of an individual.  I’m limited to exercising my scientifically driven skepticism—to doubt everything until all doubt is removed, shooting intellectual spitballs at institutions that may have less to do with real science than I do. Because they are no longer primarily engaged in the pursuit of truth, or driven by scientific skepticism. Ibn al-Haytham has provided me some cover for this mission, as he states:

The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and to attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.

If you question this wisdom, remember that this is a guy who formulated an idea that is still with us a thousand years later. So, in that spirit, I invite you all to apply critical thinking to anything that I might say in my videos or elsewhere—but to remember that doing so also requires a rigorous examination of your own beliefs. The scientific method demands a whole-hearted embrace of skepticism—because that’s the only way to establish that the truth you hold will be one that endures.


What Is a Thought??? A Biodigital Hypothesis

If you ever asked a cardiologist, “Physiologically speaking, what is a heartbeat?”—they could bore you to tears with details on anatomy, electrophysiology, hemodynamics, innervation, pulmonary and peripheral circulation.  But if you ask a psychiatrist this entirely pertinent question– “Physiologically speaking, what is a thought?”—the only honest answer would be, “We have no freaking clue.” Because we don’t.

That’s the most relevant measure of psychiatry’s scientific knowledge that I can think of. We don’t know how the brain-mind executes any of the higher functions that are the actual focus of psychiatry, the generation of thought and behavior—and so the bulk of our psychiatric research to date has been focused on finding medications that cause desired effects in the brain-mind, when we really don’t know anything about how its core functions manifest themselves.

I began psychiatric training in 1981, when the reign of psychoanalysis was yielding to the current biological wave of psychiatry. The promise of that movement is now fading—in the face of complicated questions about efficacy, corruption of our science, and iatrogenic harm done to patients by our medications, and even our diagnostic labels. Psychiatry is now under sustained attack, and for the most part its response has been a defensive crouch—fighting to maintain an illusion of mastery, when in fact what we are treating is a mystery.

You might be distracted by my references to the “brain-mind.” It’s not an affectation, but rather an acknowledgment of plain truth. The biological model of psychiatry that’s prevailed for the last several decades has prospered by focusing on the brain as an anatomical entity, while neglecting the obvious—that thought exists, and has a more than considerable influence on the disorders of perception and behavior that we identify as psychiatric illness. My composition of this article can’t be accounted for by mere shifts in my balance of norepinephrine, serotonin, acetylcholine, GABA, and dopamine. Something much more subtle and marvelous must be in play. But because we don’t know what it is, biological psychiatry ignores it—vainly neglecting the enigma at the core of our profession, to the point of rank denial. So, unless I’m discussing anatomy or neurophysiology, I use the term “brain-mind” to acknowledge this ignorance, and thus maintain an appropriate level of humility and wonder.

Our ignorance of the physiology of cognition is of course understandable. The human brain is fabulously complex, composed of about 100 billion neurons and entirely encased in bone. Physical intrusion into the brain is highly likely to cause irreversible damage, with significant risk of lethal infection. Using contemporary techniques to unlock these secrets would be grossly unethical, and probably not that fruitful anyway—since every brain is unique. Ironically, our mind may be one of the last frontiers of human understanding.

But just because we can't do hands-on research to establish the physiology of thought, that doesn't mean we can't apply some rational speculation to better conceive of and understand the nature of the mind. Physicians in other specialties turn to mechanical models to help them conceptualize and study the function of their organ systems. Cardiologists study the heart using computational models derived from mechanical pumps, and nephrologists use models based on mechanical filters. So, if you were going to study the functions of the brain, what kind of technology would you use as a model?

Well, a computer, of course. In my research on the subject I've found references contesting this analogy on rather irrelevant fine points; no such model is exhaustive, but they do help us to conceptualize and study the organ systems in question. The brain takes in information—processes it—and then uses it to exert control over our body, and to act upon our environment. And every time we have some sort of product and we want to update it with a similar set of capabilities, we stick a data processor on it and market it as "smart"—like a brain, right?

Conceptually speaking, the defining characteristic of a computer is its marvelous ability to use information to construct virtual machines—programs, or "apps"—that are in turn used to process other information. This technological ability is of course a recent development in human history—but it seems clear that the capacity of information to be used to process other information wasn't invented, but rather discovered by man. Do you think that this capacity was just lying around waiting to be discovered by us? Or isn't it quite possible, or even likely, that this capacity is already being exploited by all sentient life forms—and perhaps some non-sentient life forms as well? That this capacity emerged in the process of evolution, to be utilized in nature, is no more amazing than all the other wonders of biology we find. If any process or niche exists, life always seems to find a way to exploit it. And there's simply no better way to account for the brain's myriad functions than to assume that it might be utilizing this capacity as well.
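To make this notion of information processing information a bit more concrete, here is a minimal sketch of my own devising (the operations and names are purely illustrative, not drawn from any research): a tiny "program" that exists only as stored data, yet becomes a working virtual machine the moment a generic interpreter runs it.

```python
# A minimal sketch of "information processing information": the program below is
# nothing but data (a list of pairs), and the interpreter is the fixed, generic
# machinery that turns that data into behavior. All names are illustrative.

def run(program, value):
    """Apply each (operation, argument) pair in the program to the value."""
    operations = {
        "add": lambda x, n: x + n,
        "mul": lambda x, n: x * n,
    }
    for op, arg in program:
        value = operations[op](value, arg)
    return value

# The "app" itself is pure information; change the data and you change the machine.
double_then_add_three = [("mul", 2), ("add", 3)]

print(run(double_then_add_three, 10))  # prints 23
```

The point of the sketch is simply that the same interpreter, given different data, becomes a different machine; that is the sense in which information can be used to process other information.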

At its most basic level, a computer consists of relatively few pieces of specialized hardware with a lot of redundant structure, built to contain and utilize a complex architecture that is constructed entirely of information. This information is stored as huge arrays of binary code—each unit of which is commonly represented as a choice of either "0" or "1", and referred to as a bit. This data is coded and preserved as complex patterns within the storage media of the computer, where it can be utilized by the hardware to execute any manner of complex tasks. Just consider the variety of functions that can be performed by these constructs of data on your cell phone—music composition, financial management, videography, communications, etc.—all working within the architecture of the operating system, likewise constructed entirely of information. All this information can be transferred through, and stored on, a variety of media—wire lines, optical lines, magnetic tape, radio waves, magnetic discs, optical discs—even punch cards if you've got the time and storage space. The medium used to transfer or store the information is irrelevant—other than how efficiently, conveniently, and reliably it does so. No matter what medium you're using, the information is the same. So why couldn't such information be stored in a system of flesh and blood?
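As a small illustration of that medium independence (again, my own sketch rather than anything from the neuroscience), here is a sentence converted to a string of bits and then recovered intact; whether those bits sat on tape, traveled over a wire, or were punched into cards would make no difference to the information itself.

```python
# A small illustration that information survives a change of medium: a sentence
# becomes a string of '0'/'1' characters, is "stored" or "transmitted" in that
# form, and is reconstructed without loss. The message itself is arbitrary.

message = "The medium used to store the information is irrelevant."

# Encode each byte of the message as eight '0'/'1' characters (bits).
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))

# However the bit string traveled (wire, tape, radio, punch cards), decoding it
# reproduces the original message exactly.
recovered = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")

assert recovered == message
print(len(bits), "bits ->", recovered)
```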

Structures analogous to the components of a computer can be roughly identified within the brain. There is the organic "hardware" of the brain at large. The "firmware" consists of the structures of the lower brain that manage noncognitive functions, such as coordinating motor function, processing sensation and expression, maintaining homeostasis, and managing the storage of memory—the basic functions of the brain that we are born with, allowing us to breathe, feed, and otherwise interact with the environment. Our brain-mind has peripherals as well—our eyes, ears, hands, speech, and all other organs of sensation and expression.

But the most mysterious component of our brain-mind is the "software" wonder of cognitive processing in the cerebral cortex. Imagine the millions of molecular shifts that must occur there for each moment that we spend pondering our problems, contemplating our future, recollecting our past pains and pleasures, reading a poem aloud, or enjoying a good film. I'm an agnostic myself—but if there is in fact a divine spark within us, it rests here in the middle of this miracle. It's a miracle that goes on and on… one that includes me at this moment writing this article, and you at your moment of reading it. It's the mind, the seat of our consciousness—the domain of thoughts, hopes, worries, dreams, and to-do lists. It's where we make decisions on what to eat, whom to marry, what job to pursue, when to go to bed, and whether we're going to try an antidepressant.

But there's no space for this miracle, no place for the mind, in the biological model that dominates psychiatry today. Because it's not deemed relevant to the selection of a medication. Because it's messy, inconvenient, and time-consuming to deal with. But most of all, because we have no idea as to how it all works.

At this time, our entire biological understanding of the "software" of cognition is limited to the rough equivalent of a "bit" of memory—the molecular alteration of messenger RNA during the acquisition of memory in the hippocampus. A real-time video made by researchers at the Albert Einstein College of Medicine in 2014 demonstrates fluorescently labeled beta-actin messenger RNA traveling from the nucleus through the dendrites of a brain cell in the hippocampus of a mouse, in response to stimulation with light. This supports the hypothesized role of such mRNA in memory storage.

This involvement of mRNA in the storage of memory supports the hypothesis that thought could be based on the sort of digital processing we see in computers. Note that mRNA's primary biological function is the transmission and translation of the genetic blueprint of DNA—which holds the entire database of life in a code constructed of four discrete nucleotides. This code is quaternary, instead of binary like the code used by our computers. A quaternary system of processing is entirely possible; binary is simply our industry standard. So isn't it logical to assume that the same digital information system that life already relies upon for reproduction could be evolutionarily adapted for processing cognition?
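Nothing in this argument hinges on the base of the code, and a toy example makes that plain. The sketch below (my own construction, with no claim that this is how the brain or the genome actually encodes anything) treats the four nucleotide letters as base-4 digits, so that four "letters" carry exactly one byte, and the information round-trips just as it would in a binary machine.

```python
# A toy quaternary code (illustrative only, not a biological claim): each of the
# four nucleotide letters carries two bits, so four letters store one byte, and
# the round trip is lossless, just as it is with binary.

NUCLEOTIDES = "ACGT"                        # the base-4 "digits" 0..3
VALUE = {n: i for i, n in enumerate(NUCLEOTIDES)}

def to_quaternary(data: bytes) -> str:
    """Encode bytes as a nucleotide string, four letters (2 bits each) per byte."""
    return "".join(
        NUCLEOTIDES[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def from_quaternary(seq: str) -> bytes:
    """Decode groups of four letters back into the original bytes."""
    return bytes(
        (VALUE[seq[i]] << 6) | (VALUE[seq[i + 1]] << 4)
        | (VALUE[seq[i + 2]] << 2) | VALUE[seq[i + 3]]
        for i in range(0, len(seq), 4)
    )

encoded = to_quaternary(b"thought")
assert from_quaternary(encoded) == b"thought"
print(encoded)  # "CTCACGGACGTTCTCCCGCTCGGACTCA": 28 letters for 7 bytes
```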

I know perfectly well that this hypothesis is highly speculative, and proof thereof is far beyond my capabilities. But the principles of science are not based on the assumption of knowledge—they’re based on the assumption of doubt. And the reason I’m advancing this model is not to presume hard scientific knowledge, but rather to promote hard scientific skepticism, to be applied to the woefully inadequate assumptions that drive our current treatment model.

We are living in an era where the remedies most available to our patients are crude biological interventions that neglect the full nature of psychiatric disorders. They produce their effects by activating chemical messengers in rather primitive corners of the lower brain, bypassing any regard for the role of cognition in psychopathology. The limits of their benefits became increasingly obvious upon release of the CDC's study on suicide in June 2018. This study documented a roughly 30% increase in the U.S. suicide rate from 1999 to 2016—right here in this Age of Prozac, with more people receiving diagnoses and treatment for psychiatric disorders than ever before.

Our personal software system—all our cumulative thoughts, memories, and feelings, our cognitive identity—has a name. It's the psyche, defined as "the human soul, mind, or spirit." There was a time when psychiatry actually treated it. But instead of being appropriately humbled by this yawning void in our understanding of the brain-mind, contemporary psychiatry has chosen to fall in love with the relatively modest gains in biological knowledge that we've seen over the past few decades. This infatuation has been fueled by numerous financial interests. It's been hyped by high hopes, scientific indiscipline, and misinformation that greatly exaggerates our understanding of what we do.

If you have a computer with contaminated data or some other software problem, there are interventions to consider. They might include downloading a software patch from a website, removing a virus, reformatting and reloading a hard disc, or reinstalling the operating system. In short, the fix is to modify the information on the computer. So how can we modify the software of our brain-mind?

Well, external modification of data in a computer requires the input of new data by using a peripheral—such as a keyboard, DVD drive, or modem. Our brain-mind can likewise use our organs of sensation to obtain information from the environment. Have you ever read a book, or had a conversation, that had a profound effect on how you perceive your life? Have you ever seen a movie that changed how you felt about its subject matter? Do certain songs trigger certain thoughts, feelings, or memories? All these examples reveal the psychic power of external information—most clearly demonstrated by the onslaught of negative information that floods the senses of someone experiencing a major traumatic event—enough information, sadly, to change one's mind, and life, forever.

A well-established term for "software" intervention of the brain-mind already exists. It's "psychotherapy". I'm not convinced that passive and "neutral" models of psychotherapy are the most efficient remedy—sessions in the vein of life-coaching might be more efficient and more available, teaching mature coping strategies to patients who may not have had good "programming" in their upbringing. Such a model could be made more available in rural areas, where suicide is more prevalent. This is an especially urgent need, because the findings of the CDC's suicide study suggest that many suicides occur not because something happens to your brain, but because something happens to your life. It would be interesting to see just how much we could improve the efficiency and efficacy of psychotherapy if we devoted even a portion of the financial resources that we now spend developing psychiatric drugs for the marketplace.

It’s often said that if the only tool you have is a hammer, then everything looks like a nail. I can’t think of a more penetrating illustration of this adage than contemporary psychiatric practice. Because of our ignorance of cognitive physiology, we hide behind the biological model of psychiatry—where the products are easily marketed and sold, requiring little investment of time or effort from either party. We avoid the obvious fact that psychiatric disorders are inherently eclectic disorders that engage both the brain and the mind. And since the mind appears to be even more individuated than the brain is, and perhaps more complicated as well, effective treatment would certainly require a wider array of interventions than we have now.

If we simply acknowledge that the stuff of thought could be created from something as abstract as information, then perhaps we can see the folly in what we are expecting from our biological interventions. If you had a software problem with your laptop and brought it in for repair, and the technician insisted it was a hardware problem and offered you a hardware solution that didn’t work, you wouldn’t be a satisfied customer—and he/she wouldn’t be a great technician.

There is wisdom in ignorance, in knowing what you don’t know. Our profession needs more of that kind of wisdom nowadays.


© 2019 Paul Minot MD
All Rights Reserved