Resurrecting deceased darlings: The Missing Foreword to "AI and the Art of Being Human"
Often, less is more when writing a book. But occasionally what is cut is still worth reading—including the former foreword to "AI and the Art of Being Human."
I’m sure my grad students get tired of me advising them to “kill their darlings”1 as they work on their papers and dissertations. But it’s a mantra that’s stood me in good stead throughout my writing career, including in the just-published book AI and the Art of Being Human written with Jeff Abbott.
Up until a few days before we finalized the manuscript, AI and the Art of Being Human included a foreword written by Jeff and myself—more of an author’s note really—that provided insights into why (and how) we wrote the book, and why we felt it was both timely and necessary.
At the last minute though—and after some sound professional advice—we realized that, much as we loved it, the foreword made it harder for readers to engage with the book and get into its flow.
And so we ditched it.
The result is a much tighter and cleaner start to the book—killing our darling was hard, but it made sense from the perspective of readers. (We also moved some of the material to the Preface to provide context that would otherwise have been lost.)
And yet, the excised text did provide context and insights that I still think are important.
And so, here is the recently deceased—and now resurrected—foreword to AI and the Art of Being Human:
Foreword
It’s 3 a.m. and you can’t sleep. Again.
Maybe you’re a founder watching your entire business model evaporate in a flash update from OpenAI. Maybe you’re a parent whose ten-year-old just asked ChatGPT for advice instead of you. Maybe you’re a designer who has just seen three years of developing your signature style replicated by Midjourney in fifteen seconds. Or maybe you’re simply human, lying awake with that peculiar vertigo that comes from watching the ground shift beneath your feet while technology changes the world around us ever faster.
We know that feeling. We’ve lived it.
For Jeff, it crystallized in the unsettling realization that many of today’s most celebrated AI startups are masquerading as incumbents—churning out astonishing growth metrics, headlining massive raises, and parading enterprise logos, all while hiding fragile foundations. It’s a hall of mirrors: some of the revenue is real, but much of the traction is as much theater as substance—products built on subsidies, contracts signed out of FOMO, and valuations inflated by the belief that the rising tide of AI will lift all boats.
The challenge isn’t just backing winners here—it’s learning to see through the fog of performative success, and distinguishing enduring value from capital-fueled illusion.
For Andrew—who has studied transitions like this for decades—the disruption struck anew as he witnessed the jarring challenges and the transformative potential of generative AI sweeping through academia. It was a moment of awe at the possibilities that were being promised, but one that was tempered by the realization that, as machines replicate what we do, there’s a growing urgency to rediscover who we are.
And for both of us, the venture capitalist building AI communities and the professor exploring the social implications of this speed-of-light technology, it became clear that, despite our different backgrounds, we were grappling with exactly the same question: how to hold on to and celebrate what makes us uniquely who we are in an age of AI.
That’s when we knew we had to write this book. Not another breathless manifesto about AI’s promise or another dire warning about existential risks. But something more useful and more urgent: a practical guide for staying human when machines can do so much of what we thought made us special.
The reality is that we’re all Elena in Munich (the first character you’ll meet in the book), staring at our screens at 2 a.m., watching AI complete our thoughts with uncanny precision. We’re all Sara in Monterrey, finger hovering over the panic button while algorithms scream for action. We’re all ten-year-old Leo in Stockholm, adding our hand-drawn stardust to AI’s technical perfection because something in us knows that our “wobbly” lines matter in ways the machine can’t comprehend. And we’re all Hiro in Osaka, discovering that a seven-minute pause before deploying an AI system can make the difference between efficiency and ethics, or between optimization and wisdom.
We’re living in a time when the question “What makes me me when technology can finish my next sentence, choice, feeling, or action?” isn’t philosophical anymore, and it’s not about someone else. It appears when your AI assistant drafts emails in your voice but somehow misses your unique personality. When LLMs seem to offer a license to print money, but not necessarily create value. And when your students feel like they have access to infinitely more than you know, just from a quick ChatGPT conversation. As AI becomes increasingly good at mirroring what we do, it’s easy to lose sight of who we are. And yet, as we saw even more clearly while writing this book, with the right approach AI can reveal and even enhance who we are—and who we aspire to be. It’s a mirror that both reveals and empowers. But to get there, we need a guide that enables us to hold on to ourselves and not get lost in the tsunami.
This is what we set out to write with AI and the Art of Being Human.
At this point, we should probably put our cards on the table and acknowledge that we worked incredibly closely with AI as we wrote the book—using Anthropic’s Claude which, despite its occasional frustrations, impressed us with its eloquence and apparent ability to mirror and enhance our shared vision. This was intentional. We needed to live what we were learning. Of course, we had our worries. Would publishing an AI-assisted book damage our credibility? (Andrew especially worried about this as a fiercely human author!) Would the irony of using AI to write about being human be too much? At the end of the day though, it was clear that this book could not have been written without the learning and insights gained from working closely with one of the most powerful AI models available.
Yet to be clear, this is not a fly-by-night AI-generated book that took a few hours to whip together. It’s the product of months of discussion and research, of ideation, planning, and foundation building, as well as week upon week of very human refinement following the initial drafts.
The approach to working with Claude—and the whole library of resources and deep prompts that were a part of this—took over three months of intensive work. And every word, sentence, paragraph and chapter of the final book has our human stamp on it. The result is something far richer than either of us could have produced on our own.
As we worked on the book together, we both discovered something quite unexpected: partnering with Claude while maintaining our human agency became a living laboratory for the very things we explore and advocate in it. Every chapter, every framework, every story emerged from this dance between human insights and machine capability. Some passages that Claude helped craft are far better than anything we could have written alone. Others we rewrote many times over because the machine kept failing to capture what we were looking for.2 And some even moved us to tears as they (to use a recurring motif) reflected what we felt in our souls and yet struggled to articulate.
This collaboration has taught us what we’re now sharing with you: partnering with AI isn’t about giving up who we are or surrendering to technology. It’s about developing and exercising the very essence of who we are. It’s how we become fully human in an age of AI, especially through the lens of what we refer to here as the four inner postures: Curiosity, Intentionality, Clarity, and Care. These may feel like buzzwords, but here they are not. They’re practices that are as concrete as the seven-minute pause, the Intent Map, and the many other tools and guides you’ll find in these pages.
The stories you’ll read here—Elena’s mirror moment, Luis’s identity crisis when AI replicated his coding style, Dr. Hana catching algorithmic bias in real-time—are fiction. This particular form emerged as we worked with our AI partner, but it quickly became apparent just how powerful it is as it illuminates possibilities in ways that simple facts and case studies cannot.
We are, after all, beings that live and breathe stories. They’re how we make sense of the world and the future before us. And they show what it actually feels like when your expertise becomes replicable, when your values meet market pressures, when you have to choose between efficiency and dignity with real humans hanging in the balance—and when you suddenly begin to realize the potential of AI to enable you to be you, the real you.
This does mean, though, that to fully benefit from the book you need to embrace the stories and what their purpose is; not to be literal reflections of the world we live in (they are not messy enough in many cases, and we intentionally rely on a dash of “movie magic”3 at times to make a point that transcends what is logically possible), but to reveal deeper truths and more profound possibilities than any real-world accounts could.
This is also a book of clear cadences, rhythms and—most importantly—practical tools. Again, this is intentional—even though our AI mentor balked at it in a critical editorial read, warning that it would lead to a book that’s “too predictable” and “not messy enough” (proof that, amazing as Claude is, it still struggles to understand the transformative core of what we set out to do!).
The tools here all emerged from our three-way collaboration: a (to us) profoundly serendipitous synergy between Jeff’s business perspective and expertise, Andrew’s deep grounding in the intersection between technology and society, and Claude’s uncanny ability to make connections and reveal insights that would otherwise have remained hidden to us both. What emerged were tools like the 4-Lens Scan that help you see what (and who) algorithms miss. And the CARE Loop, which builds compassion into an organization’s DNA. Or the Roadmap Canvas that helps transform good intentions into daily practice. Each one conceived and designed for those places where theory meets reality: boardrooms where quarters can’t wait, classrooms where futures are being written on the fly, and kitchen tables where parents try to explain to their children why human judgment still matters.
And to be completely candid, we also collaborated with AI out of necessity. There’s a growing urgency to the questions we’re asking here and the tools we provide—something both of us feel and experience every day in our different communities. As we entered this collaboration, we were keenly aware that there was a window of opportunity here, but that it was a window that was closing rapidly. This isn’t about human relevance in an age of AI—that’s not going anywhere. Rather, it’s the window of opportunity that allows us to shape how AI and humanity evolve together in a conscious manner. The code being written today, the habits being formed, the systems being scaled—these will become tomorrow’s physical, institutional, and social infrastructure, as hard to change as city planning or language itself. We have, perhaps, just five years if not fewer, to integrate wisdom and intelligence, alongside care and capability, as the AI technology transition gathers pace.
This is where the art of being human in the age of AI isn’t about competing with machines or rejecting them. It’s about something harder and much more profound: becoming more fully ourselves because of how AI challenges us.
Every algorithm that replicates what we do forces us to confront what we mean. Every efficiency that AI offers demands that we clarify what inefficiencies we choose to protect. Every pattern that machines recognize pushes us to discover what patterns only we can create—the ones that emerge from our mortality, from love, from our flaws even, and from the specific weight of being human in this particular moment in history.
As you read this book, some chapters will feel like coming home. Others will undoubtedly make you feel uneasy. And both responses are okay. We’re all at different points in this journey, and the book is designed to meet you where you are while pointing toward where you might go. Try the practices. Apply our frameworks. Use the tools. Start your own circles. The transformation happens not in understanding but in doing, and not in isolation but in community.
And one last thing before we wrap up this way-overlong foreword: this book is already obsolete in some ways. By the time you read this, new models, capabilities, and reasons to feel both wonder and worry will have emerged. That’s all right. The specific tools and how they’re used will evolve. But those four inner postures—Curiosity, Intentionality, Clarity, Care—remain. The technology will advance, but the choice to stay human remains. The future will arrive, but we still get to decide who we become when it does.
So welcome. Welcome to the questions that matter. Welcome to the communities of people choosing intention over automation. Welcome to the messy, beautiful work of being human in an age of artificial intelligence.
Let’s begin.
Jeff Abbott and Andrew Maynard
October 2025
The phrase “kill your darlings” is usually recognized as going back to Arthur Quiller-Couch who, in a 1914 lecture at the University of Cambridge (and subsequently in the 1916 book On the Art of Writing) advised: “Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it — whole-heartedly — and delete it before sending your manuscript to press. Murder your darlings.” At some point this became “kill your darlings”—a rewording often, but erroneously, attributed to William Faulkner. Whether murder or kill, the meaning is clear: published works are often sharper, clearer, and more impactful when you have the discipline to remove the bits you love, but that don’t serve the reader.
As you would expect, early drafts contained several AI hallucinations and a fair scattering of AI “tells”—the types of words and phrases that scream “AI generated.” In our manual editing and fine-tuning we’ve eliminated, or at least taken the edge off, most of these. But one we decided to keep as a reminder of the book’s “silent” AI partner, and because it doesn’t detract from the narrative: For some reason, Anthropic’s Claude has a strong preference for fictional characters with the first or last name “Chen.” The AI claimed that this is a widely used name globally, and the frequency of use made the book feel more authentic. We disagreed but kept a few in anyway as the characters are all distinct. And it’s a good reminder that this is very much an experiment in human-AI collaboration.
What Andrew would refer to as Love Actually moments—after the enduring and much-loved 2003 movie in which improbable timelines are transcended by the power of the stories they convey. This is especially true in the book’s concluding chapter.