Letters to the Editor

Letters: March/April 2026

Readers write back about AI's place in higher education, Lex Hixon '63, and more.

We welcome readers’ letters, which should be emailed to yam@yale.edu or mailed to Letters Editor, PO Box 1905, New Haven, CT 06509-1905. Due to the volume of correspondence, we are unable to respond to or publish all mail received. Letters accepted for publication are subject to editing. Priority is given to letters of fewer than 300 words.

Coming to terms with AI

Clay Shirky’s article (“I, Chatbot,” January/February) addresses urgent questions about AI deployment. We need conversations about responsible supervision, including Shirky’s call to faculty to expand their AI knowledge to guide healthy student learning.

My Yale liberal arts education, including a studio art major, has been invaluable to my practice as a psychiatrist. Learning to observe in three dimensions and align multiple perspectives has profoundly informed my approach to complex cases. That some students now claim they cannot work without AI is chilling. Critical thinking skills should not be off-loaded to language prediction models.

As alarming as this is for undergraduates, children, teens, and those with mental health issues face graver risks. At least four teen suicides and five adult deaths have been attributed to unhealthy interactions with AI chatbots designed for maximizing engagement over safety. Some cases are now in litigation.

The scientific community hasn’t always been helpful. Some studies claim that AI chatbots can outperform human healthcare professionals in empathy. If empathy means understanding and sharing feelings, then AI—incapable of either—cannot possess empathy at all, let alone outperform. AI may produce responses rated as more empathetic, but careless wording erases this fundamental distinction.

I’m an enthusiastic but cautious Claude.AI user. In one exchange, Claude responded: “I don’t understand or share feelings. I process text and generate responses based on patterns. When I produce what might read as ‘empathic,’ I’m generating statistically likely linguistic responses to emotional content.”

These are language-prediction systems, without consciousness or consistent contextual understanding. As my cousin explained, “As knowledge gets cheaper, wisdom gets more expensive.” Yale must foster discernment, critical thinking, and healthy social, emotional, and intellectual development, both with and despite AI.
Matthew Romanelli ’83
Brooklyn, NY

Yale needs to embrace AI like it needs to embrace dioxin or the Marburg virus. AI is so corrosive to individual thinking, to the university’s mission, and to society, it should be handled like a toxic substance; students can study it but only with full protective gear or in a well-ventilated space. 
JB Goodman ’06
Evanston, IL

The “I, Chatbot” article was compelling, and it was what it was. I was hoping for something more about how AI is directly affecting teaching and learning.

An example. In my junior year, I got a very high grade—the highest one could get—for a paper I wrote comparing the book The Education of Henry Adams with All the King’s Men. It wasn’t an assigned topic; I just thought I saw glimmers of interesting connections. 

I suggest that anyone reading this now go to an AI engine and ask, “In what ways is the book The Education of Henry Adams similar to the book All the King’s Men?” I think most will be stunned by the depth of the instantaneous answer. This isn’t a one-off—insert any two other books (or paintings, or forms of government, or whatever) and see what you get.

I graduated Phi Beta Kappa and taught at Yale. AI is superior to me in most of the ways that mattered when I was a student, or when I was evaluating students. Is my/their one-time “A+” work now just a baseline for teaching, learning, and evaluating? I’d love to hear more about this. Of course, I suppose I could just ask AI to tell me.
Jerry de Jaager ’67
Chicago, IL

I want to applaud a teaching example that President McInnis cited in her Q and A column (“AI at Yale,” January/February). She told of a professor who “asked his students to use ChatGPT to create labels for various ceramic objects in the Yale Art Gallery. Then he asked them to critique these outputs, so they could see what AI misses.” 

This example will help guide my teaching. I have spent my professional life as a pediatrician in academic pediatrics, teaching medical students, residents, and fellows. What do I do with AI re: my teaching of doctors in training? Should I give AI a big “thumbs down”? (That would be inappropriate; there are many positive uses for AI in medicine.) Should I accept AI without criticism? (That’s not appropriate, either.) President McInnis has given me the answer: Faced with a clinical problem, student doctors should use AI to find an answer—but then they should use tried-and-true evidence-based medicine techniques to critique their AI answer. Did AI find the right clinical answer? Did it “hallucinate” an inappropriate medical course of action? Stay tuned! Thanks, President McInnis! 
Oscar Taube ’74
Washington, DC

I do not dispute President McInnis’s assertion that students and faculty should learn how to use AI, and do so responsibly, but when she is asked about “guardrails to make sure AI doesn’t do students’ learning for them,” the pedagogical example she praises disappoints. Students asking AI to come up with labels for objects in the University Art Gallery, and then critiquing those labels, might offer some sort of intellectual flex, but wouldn’t it be far more interesting—and generative—for students to come up with their own labels to compare with one another? 

I appreciated the counterpoint—intended or not—that followed in articles about our brain’s “mental flickering” as we navigate new terrain, and about the proliferation of bias-confirming pink-slime journalism created by AI (Findings, January/February). 

Finally, “I, Chatbot” made a helpful distinction between “output” and “insight.” As the article notes, AI “reduces the amount of thought required per unit of output”—precisely why it is useful in certain applications, but also why I am unenthusiastic about an assignment in which students contract the work out to a large language model in the first place, engaging with the output of a machine rather than the insight of their classmates. 
Susanna Schantz ’86
Clemson, SC

Hixon’s kindred spirit

In reading Kathryn Lofton’s article about Lex Hixon ’63 (“Good Karma,” January/February), I was immediately struck by similarities between his study of religion and that conducted by one of my favorite nineteenth-century figures. Captain Sir Richard Francis Burton was a British soldier, explorer, adventurer, widely read author and translator, diplomat, and major anthropological scholar, best remembered for his attempts (ultimately unsuccessful) to discover the source of the Nile in East Africa as well as his long-lasting dispute with the man who did make the discovery, John Hanning Speke.

While in no way does Hixon’s career match Burton’s (few if any do), what renders the two men comparable is the extremely similar, highly sympathetic way in which each approached a lifelong study of religion. Both scholars adopted a remarkably pluralistic view, not only becoming thoroughly immersed in the study of multiple faiths, but, as part of this immersion, actually practicing various of them. As one example, when delving deeply into Islam, both Burton and Hixon undertook the Hajj, that once-in-a-lifetime pilgrimage to Mecca enjoined upon all Muslims. (In Burton’s case, the journey, much more hazardous in his day, gave rise to one of his most widely acclaimed works, The Personal Narrative of a Pilgrimage to Al-Medinah and Meccah.)

Fortunately for a scholar like Lofton writing about her subject’s work, Lex Hixon’s papers appear to have survived largely intact. They have not had to endure post-mortem culling by a strong-willed wife like Isabel Arundell Burton. Not long after Sir Richard’s death while serving as the British consul in Trieste in 1890, Isabel Burton, in a tragically misguided attempt to protect his reputation, consigned to the flames many of his papers, including notes, diaries, and unpublished works. 

If memory serves, in the concluding episode of the award-winning 1971 BBC series The Search for the Nile, the narrator, actor James Mason, condemned her bonfire as “the greatest literary crime of the nineteenth century.” As a lifelong historian, I would agree that whether or not Isabel’s action was the greatest such crime, it certainly ranks among them.
L. J. Andrew Villalon ’64, ’84PhD
Lakeway, TX

Naming names

The Light & Verity item in the January/February issue, “Graduate School Will Limit Enrollment,” contains a glaring omission: the word Trump. The “new financial headwinds” that have brought about the financial constraint that the article describes did not randomly swoop down from the sky. They are the result of Donald Trump’s effort to destroy the intellectual fabric of American society, specifically by attacking higher education and even more specifically by aiming at universities regarded as “elite.” Someone who had slumbered through the past year and came upon this article would not have a clue of the reality behind the current crisis.

I understand the desire not to twist the tiger’s tail. But there’s no avoiding the truth, and cowardice has never been a successful strategy for defeating—or even surviving—a bully.
Linda Greenhouse ’78MSL
Stockbridge, MA

It happened at Newnham

A small item (Campus Clips, November/December) notes that the new movie After the Hunt begins with the legend “It happened at Yale” when in fact the movie was shot in England. This is true, and Yale readers may be interested to know that, more precisely, the film was shot at Newnham College, Cambridge, the educational establishment I went to when I graduated from Yale in 1978. When I arrived at Newnham to continue my study of English, culminating in a PhD in 1984, I felt an immediate sense of belonging. I now wonder if it was in part inspired by the likeness in architecture. It’s nice to feel these two institutions have come together in this recent film, which I look forward to viewing!
Karin Horowitz ’78
Cockayne Hatley, UK

Human error

I was surprised to see the noun “swath” (a strip or section) misused on the January/February cover in place of the verb “swathe” (to wrap). Was AI consulted when writing the title?
Rachel Anderson ’06PhD
Eugene, OR

AI chatbots rarely commit errors in spelling and grammar, which makes them seem authoritative even when their facts are flat-out wrong. It takes a team of living, breathing human beings to misspell a word—as we did, on the cover of all places—and not catch it despite multiple readings. We’re properly embarrassed.—Eds.



