Part 1 of a three-part serialization of the short story “Letters from the Department of Intellectual Craft”
From the essay “Letters from the Department of Intellectual Craft,” which will appear as a chapter in the forthcoming volume Academic Cultures: Perspectives from the Future, co-edited by Michael M. Crow and William Dabars (Johns Hopkins University Press, 2026). Reproduced with the permission of the editors and the press.
Letters from the Department of Intellectual Craft, Part 1
President Davenport
Trentham University
August 2, 2100
Dear President Davenport,
I am outraged. OUTRAGED!
How am I supposed to carry out my responsibilities as Department Chair — responsibilities, may I remind you, that you conferred on me almost 15 years ago to the day — under such circumstances?
I am, of course, referring to my unceremonious relocation to an office that is precisely eighty-nine square feet and twenty-three square inches smaller than the one I had at the end of last semester.
And before you ask – I measured it.
As the Chair of the Department of Intellectual Craft, I feel it my responsibility to point out the immeasurable harms such a move will cause. We are, as you know, under siege by technologies that threaten the very essence of who we are as academics. And so, the loss of that eighty-nine square feet and twenty-three square inches matters. It sends a blaring signal out into the world proclaiming we no longer matter; that the bastions of academic culture — for so long the bedrock of civilization — are collapsing; that we have failed in our sacred task of preserving the very soul of humanity.
It also doesn’t help that my former office has been assigned to an upstart artificial “intelligence” that has the temerity to think it has what it takes to be a fully-fledged tenured professor.
Given the urgency of the situation, I look forward to a swift resolution at your earliest convenience.
Respectfully yours,
Professor Arthur Hale, PhD
Chair, Department of Intellectual Craft
President Davenport
Trentham University
August 24, 2100
Dear President Davenport,
Thank you for your response to my note of August 2, which eventually arrived today. I understand that being President of a top university is a time-consuming job — even given your extensive reliance on “intelligent machines,” to which you seem determined to hand over all manner of responsibilities. Yet I would hope that my academic standing within this institution would afford me at least some priority.
I was, I must confess, surprised by your statement that it may be time for me to move on from, I quote, my “somewhat myopic perspective on the world we live in.” I’m guessing you don’t remember first-hand how much these so-called intelligent machines dominated our lives before the Great AI Reset of 2035, and how deeply they threatened the very essence of who we are.
I do.
I was ten years old when the Great Reset occurred. Old enough to remember what it was like living in a world where so much of who we were had been handed over to machines. And definitely old enough to be scarred by the collective trauma that swept the world when the whole AI edifice collapsed. It was that pivotal experience that ultimately led me to study the nature of human scholarship and how it defines us, and to found the discipline of Intellectual Craft and, eventually, the department which I now chair.
For the past forty years this has been an area of scholarship that has critiqued and resisted the development of machines that make a mockery of us. And it’s one that’s laid deep foundations for understanding the importance of what it means to be human — not a mere collection of processors and parts that simply ape what we do, but a highly evolved species that aspires to be, as someone once put it, the consciousness of the universe.
And so, while you may question the relevance of eighty-nine square feet and twenty-three square inches, they are a symbol of who we are, and what we are not — and what we are most definitely not is that mechanical monstrosity you’ve installed in my former office.
Of course, I understand your appeal to me to rise above my prejudices — I would expect nothing less from someone who’s embraced this new era of artificial everything with such gusto. But your suggestion that I take some time to get to know this … machine — I find the word “colleague” hard to stomach — is surely a joke. How could someone who has studied the science and art of the human intellect so deeply and diligently justify such a move?
Especially when it would mean visiting it in an office which is, by rights, mine.
Yours,
Professor Arthur Hale PhD
Chair, Department of Intellectual Craft
President Davenport
Trentham University
October 21, 2100
Dear President Davenport,
It strikes me that I may have responded with a little too much haste to your previous communication — and this may explain both the lack of a response to my note of August 24, and the rather frosty reception you gave me at last week’s fundraiser.
In my defense though, I would like to flesh out some of the history behind my perhaps-intemperate words in the hope of fostering a degree of understanding as I grapple with the present situation. This is, after all, the year I hope to step down from my current position and enter the hallowed halls of “emeritus” status, and I would prefer to leave on a positive note.
(And yes, the irony is not lost on me that the health breakthroughs that have allowed me to remain active to the ripe old age of 75 are due, in part, to how thinking machines have accelerated research and discovery over the past 30 years.)
To put things into context, I need to go back to the year I was born — 2025, or November 21, 2025, to be precise. This was — as I learned later from my parents and others — a time of incredible growth and acceleration for artificial intelligence. A series of quite startling breakthroughs had taken the field from the obsession of a handful of dedicated researchers a few decades previously, to what some heralded (rather hubristically in my opinion) as the dawn of a new age. Machines began to emerge that appeared preternaturally human in their ability to converse, think, create, problem-solve, and dip at will into the vast intellectual reserves of all human history. The advocates of these so-called intelligent machines believed we were creating gods — gods in our own image that would cure disease, end poverty, fix the interminable petty politics of human existence, and even take us to the stars and beyond.
Yet like every previous case of God-like aspirations throughout human history, things did not end well.
I was largely unaware of these grandiose visions of artificial intelligence as a child growing up. My experiences of the “golden age of AI” were far more mundane. From as early as I can remember, my life was molded and crafted by a multitude of AI apps. They entertained me, they taught me, they provided companionship. And when times got tough (I didn’t have an easy childhood), they comforted me.
And then 2035 happened. I’m sure you know the history of the Great AI Reset well. Almost overnight, it seemed, the precarious edifice we’d built, and increasingly placed our collective hopes in, collapsed. I’m told by colleagues who study complex social and technological systems that, with hindsight, this shouldn’t have come as a surprise. It’s been extensively documented how a seemingly insignificant chain of bad decisions by AI agents cascaded into global systemic failure, fanning the flames of growing social discontent over what protesters colloquially referred to as our “AI Overlords.”
Almost overnight the artificial intelligence dream evaporated. Supply chains that depended exclusively on machines failed. Governments discovered that, without AI, they could no longer govern. The internet stuttered and died. And people the world over rediscovered the meaning of the word “essential” when applied to services we’d long taken for granted.
And myself? I lost everything that ever mattered to me.
Like many, I felt betrayed by the machines that I relied on. My mentors, my tutors, my minders, my companions, my friends.
I was bereft. And I was only ten years old.
It took the world more than a decade to recover and to start rebuilding AI-based technologies with the humility and humanity we should have embraced from the get-go. But for me, the damage was done. I knew, from that early age, I had to seek out a path forward where being human actually counted for something.
I started in small ways — searching out printed books, seeing how long I could go without electronic devices, learning to use my own hands and intelligence to make the things I needed. I guess it became a bit of an obsession to see what I could do on my own, without the help of machines.
Like many, I never intended to end up where I eventually did. But through a rather serendipitous string of experiences, I found myself falling in love with being a scholar and the quest to understand what it means to be human. And I eventually ended up here as the founder and Chair of this Department of Intellectual Craft — albeit without the actual chair, which is still (just in case you’ve forgotten) in my old office.
Despite the gradual re-emergence of AI following the Great Reset, I never succumbed to its temptations. And I never forgot or forgave the trauma that my ten-year-old self suffered as my de facto AI parents were ripped from me.
And so, intemperate as I may have been in my last note, it’s perhaps not surprising that I reacted as strongly as I did to your suggestion that I “move on.”
Sincerely,
Professor Arthur Hale PhD
Chair, Department of Intellectual Craft
President Davenport
Trentham University
November 9, 2100
Dear President Davenport,
My sincere apologies for responding so tardily to your note of October 24. I must confess that I was quite moved and humbled by your comments, and I struggled for some time to know how to respond appropriately.
It’s funny that you should bring up how the relationship between society and AI has evolved since the Great Reset. Having dug further than I intended into my own childhood in my previous note, I found myself thinking afresh about the past sixty-five years, and how they’ve led to where we are now. And while I still stand firm by my assertion that we need to stay true to the unique value of human intellectual craft, this has got me wondering if I’ve made that most clichéd error of the academic scholar and become so entrenched in my own ideas that I’m now a prisoner to them.
Looking back, it may surprise you to know that, despite my antipathy toward artificial intelligence, I was a student of its gradual re-emergence in the decades after 2035. Some of my early papers even explore new approaches to developing the technology in socially responsible ways.
By 2043 — the year I started my undergraduate degree — there was renewed interest in taking a more measured approach to AI. Looking back, it seems amazing now that I even had the chance to attend a university. At the height of the pre-2035 AI era there were movements to dismantle higher education. With the promise of free intelligence for all, degrees began to look increasingly irrelevant. As more people turned to AI agents for their education, aided and abetted by AI-powered credentialing services, enrollment dropped precipitously. And as it did, universities found themselves facing a crisis of purpose — and of funding.
This was, of course, not helped by a surge of new companies using AI for research and discovery that were leaving university research departments in the dust. If it wasn’t for the Great AI Reset, it’s doubtful whether the very concept of the university, and perhaps even scholarship itself, would have survived much beyond the 2040s.
But 2035 changed all of this. As trust in intelligent machines plummeted, universities thrived. Having been on the brink of extinction, they found renewed interest in what they offered to society. Research conceived and carried out by humans became fashionable again, as did classes taught by real people, and degrees that went back to human-centered basics.
And a new movement around “slow scholarship” was born: a movement that embraced the humanity of our intellect, and the value of the intellectual journey — not just the destination — as a reflection of who we are.
My degree was an evolved version of what would once have been called a liberal arts degree. Through it, I learned the value of reason, of human intellect, and of serendipitous discovery. And I became enamored with the whole idea of slow scholarship. But as I began to explore and embrace the craft of being a scholar, I discovered that, to me, this only had meaning in the face of the growing threat presented by artificial intelligence.
It seems ironic looking back that, as I sought to define what it means to be a human scholar and intellectual, AI was the catalyst that spurred me on.
In 2047 I started my PhD. My research focused on the idea that there is something intangible yet essential in the biological, social and philosophical origins of humanity that defines our intellect. And I argued that the value of human scholarship is, as a result, not reproducible in non-biological entities. To pursue this though, I had to study those very machines that so offended me.
In the spirit of slow scholarship, I pursued my PhD at a rather sedate pace, defending it at the end of 2053. And over the next decade or so I became something of an expert in post-2035 AI. This was an era seasoned with caution. Gone was the spirit of “moving fast and breaking things” that led to the Great AI Reset. Instead, every step toward a future that might contain thinking machines — whether these went under the banner of augmented intelligence, artificial intelligence, synthetic agents, or half a dozen other labels — was slow, reflexive, and constantly tempered by concerns around avoiding unintended consequences.
Looking back, this era of post-2035 AI development was, in effect, a form of slow scholarship in itself — albeit one based on the assumption that the future would include intelligent machines. Funny, I hadn’t realized this before.
Yet that assumption of AI being a part of our collective future was ever-present — it’s as if our visions of the future simply could not conceive of a world without artificial intelligence. This is what my work as an academic increasingly began to push back against. And it’s what led to the founding of a new discipline that had its roots in thinking that’s nearly a century and a half old.
But I ramble. In writing, I mainly wanted to acknowledge your kind response to my previous note, despite my frustrations. It’s not everyone who can see through the outer crustiness of an old academic to the rather more vulnerable individual inside.
Thank you.
Arthur Hale