I recently took on a part-time role as a pre-medical counselor for undergraduate neuroscience majors, so I perused a couple of mock MCAT exams. In all candor, I would be lucky to have answered 20% of the questions correctly. Granted, I am forty-plus years removed from the crap I had to digest to take my own version of the test, but here lies the rub: one would think that if the exam in any way correlated with the cognitive performance of a practicing physician, I should ace the damned thing. I mean, I have been by all measures a highly successful practitioner (Professor, Department Head, and Residency Program Director) in one of the marquee specialties (Neurosurgery). I have stayed up on, and contributed to, the science of my specialty. I performed over 13,000 procedures and cared for tens of thousands of patients in my career. But there is no evidence, that I am aware of, demonstrating a definitive correlation between MCAT scores and “success” as a physician. There does seem to be a correlation with success on other standardized exams, but that should be no surprise: some people are simply good at the darned things. Others, who may be cognitive giants, are not. Thus, instead of predicting who will become the superior practitioners of the future, the exam seems to fit neatly into the overall contrivance between the medical education establishment and the undergraduate educational world. That is, although much lip service is given to selecting students of significant cognitive diversity, those skilled at and committed to the memorization of mountains of minutiae appear to be the most desired.
The cynical side of me would even suspect that such a contrivance is a nod to job security for the legions of university biomedical research professors out there, for the physical biochemistry of bacteriophages and the dynamics of rat mitochondrial DNA matter little to the practicing physician. To the practicing physician, it is the real-time assimilation, digestion, sorting, filtering, processing, critical evaluation, assessment, rapid evidence-based corroboration, and repeated reassessment of barrages of disparate and unstable variables that separates the wheat from the chaff. It is the ability to draw conclusions and act upon them, and to assess, dispassionately, the accuracy of said conclusions and the success of the related intercessions. All in a maelstrom of external pressures, extenuating circumstances, twists and turns, missing components, purposefully misleading data, and human psychology.
I will confess that I don’t know what the best measure is for predicting the best clinicians of the future (heck, the whole playing field will undoubtedly be upended by artificial intelligence), but I am sure it is not success in Organic Chemistry or on the MCATs. In their defense, though, Orgo and the MCATs do tell us who is socially disconnected enough, uni-focused enough, and probably narcissistic enough to memorize vast quantities of essentially meaningless data points. Admittedly, that is a useful skill set in the first couple of years of medical education, but memorization with a distinction: much of what is poured into the brains of medical students might actually be of at least minimal use at some point in a medical career (although many would contend otherwise). The stuff of Orgo and the MCATs could be lost to virtually every practicing physician on earth, and the health of the populace wouldn’t skip a beat.
After decades of selecting out the most compulsive memorizers from each wave of supremely overanxious pre-meds, we have reaped what we have sown: armies of highly intelligent, information-sodden algorithm-addicts who often lack even the bare rudiments of critical thinking skills. That is, we have created armies of practitioners who embrace the data immediately at hand but fail to challenge it with scientific skepticism or, indeed, reality; human-computer caregivers often unable to see the forest for the trees. Wouldn’t it be nice for at least some to be capable of shutting off their onboard computers and taking on the Death Star using “The Force”?
Let me give an example. One fine evening, we were called down to the E.R. to see a woman who had lost all motor function in her right leg. We were told the patient had severe lumbar stenosis on MRI and needed immediate neurosurgical adjudication. We were finishing up an emergent craniotomy and said that we would be down as soon as possible. The E.R. doc became irate because he had done the work of making the diagnosis and now we were “dragging our heels” taking care of the problem. We suggested he take another look at the patient, as the neurologic exam just didn’t sound right for the purported condition. Several further angry calls later, we ran down to check things out.
We walked into the room and greeted the patient. I asked a little history as I put my hand on the affected leg, preparing to assess her motor function. Her leg was cold. I uncovered it further; it was pale blue and mottled. It was also lifeless: she couldn’t move it at all. I checked for pulses. There were none. She had suffered a femoral artery occlusion; she was “stroking” her entire leg. The appropriate diagnosis was soon confirmed by angiography. The E.R. doc had made an assumption about the patient’s neurological status without an in-depth examination and had gone down the wrong algorithmic path. He had ordered an MRI chasing a red herring. Wrong test ordered, wrong information coming back.
Through the years we have collected dozens of such patently obvious misdiagnoses. Now, this is not a polemic against E.R. docs; they have tough jobs. And goodness knows, some of the radical misdiagnoses have come from my own team and, embarrassingly, from me. But the point here is that there is a lot of data flying about in modern medicine, and much of it is misleading or spurious. Being a real clinician means sorting through it all and constantly challenging one’s own assumptions and the assumptions of others.
It is absolutely unnerving encountering the legions of physicians who smugly strut the stage of the modern-day medical center, full of self-confidence in their assessments and decision making, blind to their own errors. And I think it relates back to the memorization fests of their undergraduate days. They became used to being the masters of their own domain. They probably continued to be so right through the first two or three years of medical school and the associated board examinations. Remember, they were the undisputed champions of unrelated minutiae. And, as practitioners, they go on to become repositories of biomedical research within their specialties, zipping off the ten latest journal abstracts on a given subject, including, annoyingly, the institution of origin and the names of the lead authors. Only they are blind to the fact that the grand majority of the literature they are quoting is predicated on bad science, is contrived (if not outright fallacious), or is not applicable to the patient problem they are currently dealing with. The net result is too many physicians out there who have no idea what they have no idea about or, more simply put, don’t know what they don’t know.
And I believe this is a direct result of an entrenched, faulty selection process predicated on unproven and frankly fatuous measures such as Organic Chemistry grades and the MCATs. I wonder whether many of the problems that plague modern healthcare delivery can be tied to dysfunctional selection, or selection of the dysfunctional. I am inherently suspicious of those who can lock themselves away in a library and pound nonsense equations into their heads day after day, year after year, whilst their classmates find the time to grow and mature, and to problem-create and problem-solve, in their collegiate days.
But wait, you say, aren’t their resumes packed with growth experiences: rescue squad work, charity drives, chess clubs, mitochondrial DNA research, the construction of a water treatment facility in East Africa? Of course they are, because these are now the fatuous prerequisites to even considering applying to medical school: activities that demonstrate their commitment or empathy or awareness or something. So they dutifully slog through the experiences, just as they slog through their physics equations, because they have to, not because they want to.
The development of critical thinking takes interaction, and challenge, and experience, and failure, and collective consideration, and bargaining, and loads of skepticism. I’m not sure these qualities can be instilled in a library. They’re much more likely obtained out on a playing field of some kind. Certainly we found, at least in selecting neurosurgical residents, that previous experience in a team sport, theatre group, or band ensemble was of incalculable value. Self-sacrifice, cooperation, collaboration, group problem solving, adaptability, and adaptive intelligence (street smarts) were all the purview of such candidates.
Would an army of true critical thinkers save us from our current woes? I certainly believe we would save a lot of the time and money spent on spinning our wheels and chasing every spurious medical lead. I would have to believe the creativity level would skyrocket (it would be hard for it to go lower). Integration and collaboration between specialties, acceptance of other professionals into the pack, and respect for diversity of opinion would all be facilitated.
I also would have to think we would easily find a far less mercenary prototype to stock our practitioner shelves. We might find people in whom the ends don’t fully justify the means. Those who wish to give of themselves and reap the benefits of patient gratitude, grace, and progress rather than supreme financial gain and social status. Perhaps we would be far more focused on overall public health and equality and prevention rather than seeing medicine as some grand business scheme.
How do we find these diamonds in the rough? I don’t know. I do know that our current, deeply entrenched but badly broken system is failing us in many ways. I would suggest that it’s time to stop congratulating ourselves on our selections and to explore different paradigms, try different measures, and cast a broader net.