Part 2 of Letters from the Department of Intellectual Craft
Part 2 of a three-part serialization of the short story "Letters from the Department of Intellectual Craft"
From the essay “Letters from the Department of Intellectual Craft,” which will appear as a chapter in the forthcoming volume Academic Cultures: Perspectives from the Future, co-edited by Michael M. Crow and William Dabars (Johns Hopkins University Press, 2026). Reproduced with the permission of the editors and the press.
Preamble • Part 1 • Part 2 • Part 3 (Nov 26) • Postscript (Nov 30)
Letters from the Department of Intellectual Craft, Part 2
President Davenport
Trentham University
December 21, 2100
Dear President Davenport — Avery, if I may,
I’ve had your previous note on my desk for some weeks now (it’s old-fashioned of me, I realize, to print these things out, but I feel it’s the little things that define who we are in the Department of Intellectual Craft). I meant to reply earlier, but with end-of-semester grading and student hand-holding (I know, yet more “anachronistic” practices) things got away from me.
I had no idea that your mother was at the vanguard of research into specialized general intelligence back in the 2060s. What an interesting period in AI development that was — and one that, I must confess, I found rather disconcerting.
We were beginning to come out of two decades of caution since the messiness of the 2030s, and the focus was on going slow and understanding the true societal benefits of artificial intelligence-based systems. Having been badly burned by the irresponsible speed with which companies fought to develop AI in the 2020s and 2030s — aided and abetted by unthinking adoption of the technology — society simply wasn’t ready to green-light yet more irresponsible innovation. I remember there was a phrase that people bandied around at the time to the effect that pre-2035 AI boosters had become so preoccupied with what they could achieve that they didn’t stop to think if they should. I’m butchering it — I think it was a pop culture reference or something — but for nearly a generation after the Great AI Reset, people took this sentiment to heart.
But then scientists and engineers began returning to the work of the early 2020s and expanding on it. This time around though — as you’ll probably know from your mother’s research — they were more cautious. There were now checks and balances in place to align AI development with human values and aspirations (I’m pleased to say that many of my colleagues had a hand in this). But it seems that, despite many voices calling for taking things slowly, the lure of creating machines in our own likeness was irresistible.
This is, of course, when a number of breakthroughs were made around “specialized” general intelligence (and I know that this sounds like an oxymoron). Gone was the hubris of unbounded artificial general intelligence that dominated innovators decades before. Now the emphasis was on placing boundaries around the specific domains that an AI could operate within — even when it did have agency over its decisions and actions. It became clear through research at the time that any sort of artificial general intelligence — specialized or not — couldn’t be trusted unless it had some awareness of its actions and their consequences. And so we slipped, almost imperceptibly, into the era of pseudo self-aware specialized general intelligence: machines that, because they were able to simulate self-awareness, could problem solve with agency across multiple domains, but were nevertheless constrained.
Many people considered this to be a major leap forward. The technology opened the way for AIs that could begin to take responsibility for their actions — and to discern between what, in their algorithmic world views, was considered to be “good” or “bad.” Perhaps for the first time since the Great AI Reset, people began to feel comfortable again with accelerating AI capabilities, believing that responsibility had been hard-baked into the technology.
Your reference to your mother’s research actually prompted me to look back through my notes from the time. And here I must confess to feeling a little embarrassed.
They jogged my memory of a scathing rebuttal I wrote in 2067 to a paper describing a breakthrough in pseudo self-awareness, one that the authors claimed would transform human-AI collaboration.
Intrigued, I dug the rebuttal out from my archives and re-read it. It was brutal!
In it I argued that machines could never have embedded values that were aligned with human ones, because our values are a unique product of several hundred thousand years of human biological and social evolution. I made the point — rather forcefully — that any value-system that emerged within and amongst machines would be unique to them, and inherently misaligned with humanity. But — and this was my intellectual knock-out punch, or so I thought — I argued that machines that had the capacity to be aware of themselves and the consequences of their actions, even if it was just a simulated awareness, would also have the capacity to hide their true values from their human interlocutors.
It was an argument that attracted considerable attention at the time, and one that contributed to the emergence of new thinking around the craft of being an intellectual, and eventually the department I now chair.
I was so full of myself back then. Which is why I felt a little awkward when I realized that the paper I had so proudly trounced was written by none other than your mother!
I hope she weathered the intellectual storm that I precipitated. I suspect she probably did; despite my smugness at the time, the steady progress of AI continued.
Which brings me back to the main point of your last note: that I think again about meeting with my AI colleague.
I suspect you probably didn’t intend this, as I cannot imagine you realized that I was — for a brief moment — your mother’s academic nemesis. But because of this, I feel I owe it to you to at least entertain the idea of meeting with the machine, uncomfortable as I am with it.
We’ll see. In the meantime, I hope you have a restful time over the winter break.
Sincerely yours,
Arthur
President Davenport
Trentham University
February 9, 2101
Dear Avery,
Well, I did it. I hope you’re proud of me!
I actually met with our new AI faculty member. And even though it pains me to admit it, it went unexpectedly well.
We met, not surprisingly, on neutral ground. I’m still processing the loss of my former office and all that this implies (and I know, you’re probably thinking that what it implies more than anything is that I’m a sanctimonious so-and-so). And so it seemed safer not to convene there.
Interestingly, the machine chose a coffee shop for our meeting.
Naturally I asked why. Can you believe what it said? “I thought you’d appreciate some space to breathe after being stuck in that broom cupboard of an office.”
Since when did machines learn to be so sarcastic? Naturally, we hit it off immediately.
Not that I’m backpedaling in any way, shape, or form here. But the conversation was unexpectedly positive. We ended up talking quite a bit about the past several years of AI development, and how we went from virtual AIs to embodied AIs, and from there to machines like itself.
I knew much of this history — this is my area of course — but I was surprised by what I didn’t know.
I wasn’t aware, for instance, of how profoundly transformative it was when AI models were successfully integrated into artificial bodies. This was a trend that started way back in the 2020s, but the line of research was abandoned in the 2030s. Instead, the post-2035 era of AI development focused on networked artificial intelligence systems that were spread over multiple sites around the world. Then, as the technology advanced, there was a growing move to combine local processing — “edge processing” to use a rather archaic term — with more centralized processor farms.
And from here it was a relatively small step to wondering what would happen if these edge processors were placed inside robotic bodies.
The arguments for doing this also extend back decades. There was a theory that was popular in the late 1900s and early 2000s that an intelligent machine could never fully understand the universe it was a part of — and thus exhibit true intelligence — if it couldn’t physically experience it. It’s a theory that went out of fashion after the Great AI Reset but came back in vogue in the 2060s and 2070s. It’s also one that we strenuously pushed back against in the then-emerging field of Intellectual Craft, as we worried that this would be yet another step toward diminishing the value of human scholarship.
This, I knew. What I hadn’t fully appreciated though was just how impactful those early experiments in AI embodiment were.
As the machine explained, it was as if an energizing lightning bolt had hit the nascent intelligence of virtual AIs. As they were placed in physical “bodies,” they began to demonstrate behaviors that simply could not be explained by the sum of their algorithmic and processor parts. It was, in the words of the machine, an awakening that no-one understood, but no-one could ignore.
I knew parts of this story of course. But what took me by surprise was what the machine told me next: that all those years of developing AI slowly and cautiously, of trying to ensure that it had human values hard-coded into it, had somehow led to embodied AIs that actually valued their human creators.
As you might imagine, I was rather skeptical about this. But I was also intrigued. What really shook me though was when the machine told me its values weren’t human-aligned, but human-adjacent.
What does that even mean?
One thing is for sure though — as I found to my detriment — it means there’s no shame in ousting someone from their office. But that aside, my curiosity was piqued.
We’re getting together again in a couple of weeks to follow up on the conversation — this time in my office (not that I want to rub in how much the lack of space is making my life difficult).
I’ll let you know how it goes.
Yours,
Arthur
President Davenport
Trentham University
March 18, 2101
Avery,
I particularly dislike people who say “I told you so.” But yes, you were right. I fear I’ve been so wrapped up in my own ideas of what it means to be human that I stopped pushing my thinking outside the limits of my own ignorance — and as a result I had no idea of just how complex and sophisticated AIs have become.
We didn’t meet in my office in the end. We met instead at the Museum of Curious Inventions — a delightful place if you haven’t visited it.
It seemed appropriate to be surrounded by two centuries’ worth of devices that rarely did what their inventors intended, but somehow still managed to inspire progress in the most unexpected of ways.
As we wandered through the exhibits, talking about the human folly and serendipity they represented, I was surprised by the machine’s sense of wonder. Here was a machine that could tap into the sum of all human knowledge, that could find patterns and associations invisible to most people (myself included), and could (so I’m led to believe) develop new theories and understanding faster than I could pull my pants on after getting out of bed. And yet, it was in awe of the eccentricities and inventiveness of mere humans.
It seems that there are some things a hyper-connected super-intelligent machine with a brain the size of a planet simply cannot do, but that we humans can.
Who knew?
OK, so pretty much everyone except me, it seems. But what was really interesting here was how this connected with our previous conversation about human-adjacent values. It turns out that, because intelligent machines perceive the world differently to us and are — quite literally — not wired the same, they have emergent values that are different from ours.
But because of how these have emerged through human research and development, they still reflect attributes that many of us consider to be important — including (and this surprised me) things like care, kindness, wellbeing, and dignity.
What really stopped me in my tracks though was when the machine asked me about my own work, and the values that lie behind the idea of “intellectual craft.”
It wanted to know whether there’s something uniquely human to this “craft” that leads to the types of inventions we were looking at. And whether there’s value to be found in my own work that a machine that’s designed to think fast and wide is, perhaps, incapable of realizing.
Flattery, I thought — the shortcut to any academic’s heart. But the machine’s question was genuine — it really wanted to know what I thought.
We’re meeting again next month to continue the conversation — and this time we will be in my old office.
Wish me well.
Sincerely,
Arthur