New to the Future of Being Human? Start Here:
If you’ve recently subscribed to the Future of Being Human Substack, or have just stumbled across it, this is a great place to start: it gives a sense of what the Substack is about, what it covers, and how it approaches tech, society & the future.
The Future of Being Human Substack is:
Future‑centric and responsibility‑driven. Consistently treating the future as something we co‑create and owe duties to, not just something that “happens” — with humans as “architects of the future” who must close the gap between what we can do and what we should do.
Transdisciplinary and convergence‑focused. Moving between physics, complex systems, ethics, policy, social science, philosophy, communication, and more, with a strong eye on how converging technologies (AI, biotech, neurotech, materials, etc.) create qualitatively new kinds of transitions and risks.
Nuanced in its approach to risk and social value. Instead of treating “risk” as just probability×harm, reframing it around threats to what people value — identity, dignity, belonging, aspiration — and advocating for governance that is explicitly attentive to those less tangible but deeply important things.
Committed to inclusion and de‑marginalizing futures. Repeatedly asking who gets to design, benefit from, and be harmed by emerging technologies, and arguing that futures built only by the powerful are, by definition, unjust and unstable.
Story‑ and culture‑led. Using stories, metaphors and shared cultural objects as tools to help people think together about complex futures.
Willing to experiment with AI while interrogating it. Not writing about AI from the outside, but actively using it and then turning those experiments into occasions to probe risk, alignment, benefits, scholarship, labor, meaning, and more.
Ten starter posts to orient you:
1. Design Principles for De-Marginalizing the Future (2019)
I was recently asked to write a short piece prompted by the question “If you could create the Guide, what would be in it? What would be important elements to be included?” The prompt was part of The Guide Project — a new Arizona State I…
2. What can sci-fi movies teach us about technology ethics? (2019)
Who’d have thought, a few years ago, that ethical innovation would be such a hot topic? Yet every other day, it seems, some company or scientist discovers to their dismay that, as technologies become ever-mor…
3. We need to rethink our relationship with the future (2020)
Whichever way you look at it, 2020 is a year that has strained our relationship with the future like never before. And unless we rethink this relationship, we could be heading for a catastrophic breakup that will ultimately serve no-one.
4. In Bill Joy’s “Why The Future Doesn’t Need us” AI is nowhere, and everywhere
In April 2000, Sun Microsystems co-founder Bill Joy shook the world of tech in a wide-ranging article in Wired magazine.
5. We have a technology problem – and it probably isn’t what you think (2024)
I’ve lost count of the times I’ve been asked if I’m a techno-pessimist or a techno-optimist. It’s a frustrating question as it makes little sense to me. It’s a bit like asking if I’m an oxygen pessimist or optimist — “neither” is the answer of course as oxygen is integral to who I am, and not an add-on that I can take or leave.
6. Navigating the Ethical Dilemmas of Human-Enhancing Brain-Computer Interfaces (2024)
In 2019, researchers at Neuralink published a paper in the Journal of Medical Internet Research that described the company’s research on invasive brain-computer interfaces (BCIs) — devices inserted into the outer layers of the brain to allow signals to be extracted from, and written to, small clusters of neurons.
7. The Artisanal Intellectual in the Age of AI (2025)
I must confess that, since getting my hands on OpenAI’s Deep Research (not to be confused with the just-released Deep Research feature on Perplexity), I’ve been intrigued by how it’s forcing me to rethink what it means to be someone who makes a living by thinking.
8. The “hard” concept of care in technology innovation (2025)
The "hard" concept of care in technology innovation
A little over a year ago I was sitting in a meeting where some of the world’s leading experts were discussing emerging developments in AI. But what caught my attention was not the latest in frontier models or novel advances in machine learning, but a psychologist talking about the concept of care.
9. What does responsible innovation mean in an age of accelerating AI? (2025)
A new speculative scenario on AI futures is currently doing the rounds, and attracting a lot of attention. AI 2027 maps out what its authors consider to be plausible near-term futures for artificial intelligence leading to the emergence of superintelligence by 2027, followed by radical shifts in the world order.
10. AI at a Crossroads: The Unfinished Work of Aligning Technology with Humanity (2025)
I’ve resisted doing this for so long, but today two very different but equally important reports on AI made me break my rule — and as a result this article was written entirely by ChatGPT. I’m posting it because I think the insights here are profound and timely — and articulated far better than I could do in just a few hours.