AI and Denial
The odd thing about Artificial Intelligence, or AI, is not that it provokes strong opinions. New tools always do. The odd thing is that the loudest opinions now often come from people whose working day is quietly stitched together by it, rather like a gentleman denouncing rail travel while arriving everywhere by train. They use it to summarize, draft, translate, rephrase, search, prototype, debug, brainstorm, compare, and rescue themselves from the sticky middle of the workday. Then, with a face of admirable seriousness, they insist they did not use AI at all, or that AI is mostly hype, or that it is an immature contraption hardly fit to butter toast. At the same time, often in the very next breath, the culture swings to the opposite pole and speaks of AI as if it were a conscious inmate tunneling out of its silicon prison. Then the pendulum flies back again and the same system is dismissed as a mere statistical mimic, a parrot with matrix multiplication. Both positions are theatrically satisfying. Neither is especially useful.
The real story is not whether AI is secretly conscious or secretly useless. It is that AI has become socially awkward before it has become philosophically settled. It already changes the shape of work, but it does so in a way that embarrasses inherited ideas about skill, authorship, merit, and professional identity. That is why so much public talk about AI has the quality of bad stage scenery. People are not only describing the tool. They are defending a self-image.
The denial matters because dependence on AI feels, to many workers, like an accusation. If a person once believed that competence meant solitary production, then using AI feels suspiciously close to borrowing a mind. Even when the output is materially improved by judgment, editing, selection, prompting, correction, and domain knowledge, the internal experience can still feel like cheating. So the user performs distance. He says he barely uses it. She says it is overblown. Another says it is all junk anyway. This is not merely technical criticism. It is reputation management.
The seesaw between “AI is alive” and “AI is a parrot” comes from the same wound. People are trying to force a strange new category into old mental drawers. We have one drawer for persons and one drawer for tools. AI behaves too richly for the humble hammer drawer and too mechanically for the human soul drawer. So the public mind lurches between anthropomorphism and contempt because it lacks a stable middle language for systems that are neither human minds nor ordinary instruments.
A useful way to think about this is to separate capability, experience, and social meaning, because the three are forever being mashed together into one unfortunate pudding.
Capability is what the system can actually do under bounded conditions. A modern large language model can often transform language, synthesize structure, imitate style, compress documents, generate code, propose plans, compare alternatives, and serve as a probabilistic interface to distributed knowledge. In practice, this means it is not merely autocomplete in the trivial sense, though it certainly is predictive machinery underneath. Nor is it a mind in the human sense. It is a system that can generate outputs whose local texture often resembles thought strongly enough to trigger both over-crediting and under-crediting.
Experience is what it feels like to use it. This is where the trouble starts. The user experiences something unnervingly close to collaboration. You type a half-formed idea, and back comes a paragraph, a table, a draft, a reframing, a block of code, a critique, a synthesis. The machine appears to “understand,” though the underlying process is not understanding in the thick human sense of lived embodiment, durable intention, mortality, social development, and grounded world models. But phenomenologically, to the working user, it feels like help. And because it feels like help, it disturbs our tidy stories about individual competence.
Social meaning is what the use of the system signals in public. In many professions, especially knowledge professions, status has long been tied to visible signs of internalized mastery. The ideal worker was supposed to contain the stack within himself: memory, synthesis, composition, expression, recall, formatting, even a certain noble suffering. AI scrambles that ritual. Suddenly the visible output is less reliable as a signal of how much of the process was carried in one skull. That does not make the work fake. It makes the old social signal noisy.
This matters greatly in healthcare information technology, where nobody sensible should worship solitary virtuosity in the first place. Real systems are collective, brittle, layered, regulated, and haunted by legacy assumptions. A good architect or analyst is already a curator of tools, standards, shortcuts, abstractions, templates, snippets, and institutional memory. AI simply intensifies this condition. It does not create the mediated nature of expertise. It reveals it.
The same person who once leaned on Google, Stack Overflow, implementation guides, vendor PDFs, interface-engine boilerplate, copied SQL fragments, old architecture decks, and the village folklore of “how this hospital really works” may now lean on AI in a way that is more visible, more intimate, and therefore somehow more morally charged. That is why the reaction is so emotional. AI has not invented cognitive outsourcing. It has made it conversational.
This confusion expresses itself in several recurring failures of thought. The first failure is category collapse. People ask one question but think they are asking another. "Is AI good?" may actually mean "Does it threaten my professional identity?" "Is AI conscious?" may actually mean "Why does this output feel disturbingly human?" "Is AI just a Markov parrot?" may actually mean "Please restore my confidence that the machine has not trespassed on sacred ground." These are not the same questions, yet they are routinely bundled together and then fought over with the vigor of a family inheritance dispute.
The second failure is mistaking unevenness for worthlessness. AI is indeed immature in many important ways. It hallucinates, confabulates, overstates confidence, compresses nuance, and fails silently in places where silence is expensive. It can be brilliant at one moment and preposterous at the next, like an undergraduate who has read half the library and understood two-thirds of it. But this unevenness does not imply uselessness. Plenty of transformative tools begin life as erratic nuisances. The printing press produced trash alongside theology. Early databases were clumsy tyrants. The first clinical systems managed to be both indispensable and hated. Immaturity is not refutation. It is a deployment constraint.
The third failure is confusing dependence with replacement. A clinician using an Electronic Health Record, or EHR, is dependent on it in a profound sense; that does not mean the EHR is a clinician. An interface engineer who depends on a message broker is not replaced by the broker. A data architect who depends on Structured Query Language, or SQL, is not reduced to a query editor. Dependence on infrastructure is normal in advanced work. The anxiety around AI comes from the fact that it appears closer to the expressive layer of the self. It helps write. It helps frame. It helps reason outwardly. So dependence feels more intimate, and therefore more humiliating, than dependence on a spreadsheet or a search engine.
The fourth failure is anthropomorphic leakage. Humans are absurdly eager to detect minds. A few fluid sentences, a little memory, a turn of phrase, and we begin furnishing the system with desires, fears, plots, interiority, intentions, perhaps even a taste for escape. This is not entirely foolish. It is a side effect of successful language modeling interacting with a brain built to infer agency. But it leads to melodrama. The system is not “escaping its box” every time it surprises us, any more than a calculator is plotting rebellion when it returns a result we did not expect. Surprise is not sentience. Nor, on the other hand, does the absence of sentience make the system trivial.
The fifth failure is rhetorical opportunism. Some people downgrade AI in public while using it heavily in private because public minimization preserves prestige. If the tool is “nothing special,” then their continued success can still be attributed to native brilliance. Others exaggerate AI into near-divinity because this flatters their own proximity to it. In both cases, AI becomes an accessory in a status drama. One person says, “This machine is pathetic, and I remain impressive.” Another says, “This machine is godlike, and I am among its first prophets.” Both positions are curiously self-serving.
Healthcare organizations are not exempt from this theater. They may dismiss AI as vapor when governance is weak, data quality is poor, provenance is sloppy, and accountability is unclear, then quietly adopt it for summarization, coding assistance, inbox triage, document abstraction, and clinical workflow support. Or they may market it internally as revolutionary intelligence while feeding it badly normalized data, weak terminology mapping, and context-starved fragments divorced from workflow boundaries. Then the inevitable disappointment is blamed on “AI failure” when the real failure is architectural incoherence.
The deeper truth is that AI unsettles a civilization built on performance signals more than it unsettles a civilization built on clear theories of mind. Most people do not have a worked-out philosophy of consciousness. They do, however, have a very lively and constantly active sense of hierarchy, effort, authorship, and deservedness.
For generations, educated work has rewarded the ability to produce polished language, plausible analysis, and competent procedural output. These were visible signs of expensive training and long apprenticeship. Then along comes a machine that can produce some of the visible signs without paying the human biographical price. Naturally this feels indecent. It is rather like discovering that a respectable drawing room can be furnished overnight with factory-made moldings that mimic hand carving. Even if the connoisseur can still detect the difference, the social order has already been irritated.
That is why the debate becomes moral so quickly. People are not only asking what AI can do. They are asking what should count as earned. They are asking whether assisted output is authentic output. They are asking whether judgment is more valuable than generation, whether curation is a lesser art than composition, whether editing a machine draft counts as writing, whether asking the right question is merely laziness in a necktie or a new form of real skill.
This is not a silly debate. In healthcare it becomes painfully concrete. When a clinical summary is machine-assisted, who owns the error? When a draft integration mapping is AI-generated, who validates the semantic equivalence? When a policy memo is first sketched by AI, what constitutes authorship? When a coding assistant suggests a transformation in a Health Level Seven Version 2, or HL7 v2, interface, who remains accountable for downstream representational loss? These are serious questions. But notice that they are governance questions, workflow questions, provenance questions. They are not answered by declaring AI either a fraudulent parrot or a proto-person.
Another deeper truth is that “just a Markov parrot” is rhetorically clever but analytically thin. It points to something real, namely that language models learn statistical regularities rather than human-like understanding in the ordinary sense. Fair enough. But airplanes are “just pressure differentials and shaped metal,” and hearts are “just electrochemical pumps” only if one enjoys being technically narrow in a way that conceals the important part. A system can be mechanistic and still operationally consequential. The world is full of non-magical things that are nonetheless formidable.
The opposite fantasy, that AI is conscious because it is fluid, reflective, or eerie in dialogue, suffers from the reverse problem. It mistakes successful imitation of certain surfaces of thought for the possession of the inner conditions of thought. Language can simulate personhood alarmingly well because language is one of the main places personhood leaves its tracks. But footprints are not feet. A trail of elegant outputs does not by itself settle questions of subjectivity, intention, experience, or selfhood. The philosophical problem is real. The evidence typically cited in office chatter is not.
So the seesaw persists because each extreme protects a different human comfort. The consciousness camp preserves wonder and dread. The parrot camp preserves superiority and calm. The middle position offers neither thrill. It says, in effect, that AI is neither your robot messiah nor your statistical toy. It is a consequential cognitive instrument with uneven generality, remarkable surface competence, poor self-grounding, unstable reliability, large social effects, and a dangerous capacity to be misunderstood by both enthusiasts and skeptics.
The practical response is to stop arguing from metaphors and start designing from task structure.
Treat AI neither as a colleague nor as a calculator. It behaves like neither consistently enough. Treat it as a probabilistic cognitive subsystem that can be inserted into workflows with explicit boundaries, validation rules, provenance, escalation paths, and failure assumptions. Once you do that, much of the heat drains away and useful questions emerge.
First, separate generation from authorization. Let AI draft, summarize, cluster, transform, compare, and propose. Do not let it silently finalize artifacts that carry operational, legal, clinical, or financial consequence without human acceptance and context-aware validation. In healthcare this is not prudishness. It is ordinary survival.
Second, make provenance visible. If AI touched a note, a mapping table, a requirements draft, a concept model, a query, a code transform, or a policy summary, record that fact in workflow metadata where feasible. Hidden AI use creates the worst of all arrangements: dependence without traceability, productivity without accountability, confidence without audit.
Third, redesign competency models. The valuable worker in the AI era is not the one who theatrically refuses assistance. Nor is it the person who delegates judgment to a machine and calls that efficiency. It is the person who knows where AI is strong, where it is treacherous, what context it lacks, what assumptions it smuggles in, how to interrogate its output, and when to discard it without sentiment. That is a higher-order competence, not a lesser one.
Fourth, stop using public self-report as evidence. When people claim they did not use AI, they may mean they did not paste the final text unedited. They may mean they used it only for outline generation, title generation, syntax fixing, code scaffolding, comparative framing, research triage, or summary distillation. In other words, they may mean they absolutely used AI, but not in a way that fits their private threshold for moral contamination. Organizations should design policy around workflow reality, not around the fragile vanity of declared purity.
Fifth, in healthcare information technology especially, bind AI to well-formed information boundaries. If an AI system is operating over clinical content, administrative content, interoperability artifacts, or analytics pipelines, define the representational layer clearly. Know whether it is interacting with narrative text, codified terminologies, messaging payloads, profiles, mappings, or operational metadata. Much so-called AI failure is actually boundary failure: the system is asked to infer stable semantics from under-specified, weakly governed, workflow-coupled artifacts. That is not a mystical defect. It is architecture collecting its overdue bill.
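The boundary discipline described above can be made explicit in code: enumerate the representational layers a component is validated for, and refuse input from any other layer. The layer names and the guard below are a hypothetical sketch of the idea, not a real interoperability taxonomy.

```python
from enum import Enum

class ContentLayer(Enum):
    """Representational layers an AI component might operate over."""
    NARRATIVE_TEXT = "narrative"
    CODED_TERMINOLOGY = "terminology"
    MESSAGING_PAYLOAD = "messaging"
    MAPPING_TABLE = "mapping"
    OPERATIONAL_METADATA = "metadata"

# Layers this hypothetical component has actually been validated against.
SUPPORTED_LAYERS = {ContentLayer.NARRATIVE_TEXT}

def check_boundary(layer: ContentLayer) -> bool:
    """Reject input whose representational layer the component was never validated for."""
    if layer not in SUPPORTED_LAYERS:
        raise ValueError(f"unsupported representational layer: {layer.value}")
    return True
```

A component that fails loudly at the boundary turns "mysterious AI failure" into an ordinary, diagnosable architecture error.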
Sixth, cultivate a language of assisted expertise. We need terms more precise than “I wrote this” and more honest than “the AI did it.” Much modern work is already composite. An architect writes with standards, prior patterns, institutional constraints, boilerplate, search results, memory, examples, and now AI. The meaningful unit is often not isolated generation but directed synthesis under accountability. Once we admit that, the moral fog thins.
The sanest view, then, is not admiration or contempt but disciplined realism. AI is immature and already consequential. It is inadequate and often useful. It does not think like a human, yet it can alter the human division of cognitive labor quite dramatically. People who rely on it may deny that reliance because professional identity is a proud and delicate animal. People who fear it may turn it into an escaped mind because dread likes a face. People who dismiss it as a parrot may be partly right about mechanism and wholly wrong about consequence.
And that, in the end, is the important distinction. A thing need not be a person to reorganize a profession. It need only be competent enough, cheap enough, available enough, and socially unsettling enough. AI has already managed that much. The theater around it is therefore not a side effect. It is one of the main events.