Spoiler Alert: I rebuilt my book for AI!
I've been experimenting with translating my 2018 book Films from the Future into a website designed primarily for AIs. Here's how it went.
As an author, I write for human readers. As I’ve noted before though, there’s a growing trend of AIs becoming the predominant consumers of the written word, often acting as a translator between source and consumer.
But if this is the case, why not embrace the trend and write directly for AI?
The idea intrigues me — and not only me. There’s a growing movement toward creating AI-first content online. And so I thought I’d dive in and experiment with rebuilding my book Films from the Future: The Technology and Morality of Sci-Fi Movies as a website designed primarily for AI consumption.
The choice of book was very intentional. Even though Films from the Future was written in 2018, the underlying concepts, ideas, and observations are, if anything, far more relevant now than they were eight years ago. And as a result I’ve been thinking about ways to breathe new life into it.
And given the growing shift toward using AI apps to explore and synthesize information, repackaging it for AI consumption made a lot of sense.
Plus, it gave me the chance to add new material to the book’s original content while moving away from a title that only a publisher could love (I was never a fan of Films from the Future).
The result is the rather cheekily named website spoileralert.wtf.1
The website is built on a foundation of 127 markdown files2 that include the book’s original content, together with additional material on cross-cutting themes, connections to emerging trends and issues, and personal reflections from me on everything from the book’s backstory to movies that did and did not make the cut. But unless you are comfortable reading markdown files online, these are not intended for human consumption.3
Rather, they are coordinated through a master AI-legible file — llms.txt (building on a standard proposed by Jeremy Howard) — that allows AI platforms to act as a personal guide to the website.4
This is a markedly different approach to simply uploading the book into an AI (assuming you could get hold of the PDF in the first place), or building an AI bot or agent.5
For one, it allows AI models to navigate and synthesize far more material than is presently possible with either of these approaches. It also means that anyone using the site can decide for themselves which AI platform to use, and how to use it.
There’s also an added advantage that, if you are using something like Claude or ChatGPT with memory turned on, the website plus AI become a highly personal guide to exploring emerging technologies and their responsible and beneficial development and use.
Reflecting the website’s AI-first design, the human-facing part of spoileralert.wtf is minimalistic. Apart from a brief introduction and overview, the landing page includes a prompt to cut and paste into an AI of your choice, and that’s pretty much it:
… although, being a writer, I couldn’t resist adding a little more stuff below the sign off!
At this point, this is still an experiment in AI-focused publishing. But as more and more people rely on AI apps rather than direct sources for information, I suspect that it’s a direction that’s likely to become increasingly important.
With that, please do try it out and let me know how you get on: spoileralert.wtf.
And if you’re interested in more information on the technical details and the experience of building the website, read on …
The Below the Fold stuff
The website architecture
As I mentioned above, the website is built around a comprehensive llms.txt file that gives any AI that’s pointed toward it a clear map of what the site includes and where to look for specific content. If you’re interested you can read the llms.txt file here — it’s a markdown file and so easier to read by downloading and opening in a markdown editor.
This file describes the site’s architecture and links to 127 markdown files that provide guidance on interpreting and engaging with the book and website content, as well as allowing the AI access to the full text of the original book.
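For readers unfamiliar with the format, an llms.txt file in Jeremy Howard’s proposed standard is itself just markdown: an H1 title, a blockquote summary, and H2 sections listing links with short descriptions. As a rough illustration — the section names, paths, and descriptions below are hypothetical, not the site’s actual file — such a file might look like:

```markdown
# Spoiler Alert

> An AI-first rebuild of the 2018 book *Films from the Future*: the full
> original text, thematic guides that cut across chapters, and new material,
> organized for navigation by AI platforms rather than human readers.

## Domain guides

- [Emerging science and technology](https://spoileralert.wtf/domains/emerging-tech.md): cross-chapter guide to the technologies the book explores
- [Responsible and ethical innovation](https://spoileralert.wtf/domains/responsible-innovation.md): themes around beneficial development and use

## Book content

- [Chapter 1](https://spoileralert.wtf/book/chapter-01.md): full text of the opening chapter
```

An AI pointed at the root file can then follow the links it needs rather than ingesting everything at once — which is what makes this approach scale beyond a single uploaded PDF.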
Within these files, six top-level domain guides cover:
Emerging science and technology
Responsible and ethical innovation
Navigating the future
The twelve movies that the book draws on
Post-2018 developments, and
Complex emerging questions
These provide sufficient context to allow the AI to navigate book content and associated material according to specific themes and areas.
Each domain file then links to a number of specific topic files — over ninety of them. These were identified and fleshed out working with Claude Code, and create a thematic guide to the book that cuts across chapters and issues.
Finally, there are six supporting files that cover everything from discussion questions and my original movie shortlist, to an educator’s guide, and even some previously unpublished book trivia.
If you’re interested, the full site structure can be explored through the Contents web page.
The process
The complete website was built while working closely with Claude Code. While I had a very strong conceptual and editorial steer, Claude Code was pivotal in helping translate this into reality. Claude Code helped develop the site’s architecture, drafted content files, generated html code, and helped debug/refine what ended up being a deeply integrated and interconnected set of resources.
In all there are nearly 400 files associated with the site, as many of the markdown files have associated html files. And all need to be cross-linked and cross-referenced. Both the magnitude of a project like this, and the complexity of tracking hundreds of links, would have made this a near-impossible task for me to take on unaided.
Similarly, Claude Code could never have generated the website without my input and steer — the feel, functionality and purpose of the site, as well as the type of content, all come from me. And very intentionally, the site incorporates my voice, tone, insights, perspectives, and sensibilities, in ways that were only possible through working collaboratively with Claude Code.
Through all of this, Claude Code was a joy to work with — especially when adding new files that required deep integration over hundreds of documents! The barrier to entry on a project like this is remarkably low, and the ability to talk through ideas, plans, and implementation as if talking to a colleague or co-worker was, for me, a game changer.
The extra bits
One of my hopes with this exercise was to add substantial value to the original book by making the content more relevant than ever to the present day. I also wanted the chance to add further content that users could not get anywhere else. As a result, if you assess content by file count, the original book constitutes less than 10% of the rebuilt version.
I won’t give too much away here as it’ll spoil the joy of discovery as you explore the site through your AI of choice. But there is information embedded in the site’s files on the backstory to the book that I haven’t shared before, details of films I considered for the book but that never made it, commentary from Claude on what I left out, and much more.
There are also a series of conversations on the site between simulated users and Claude. I added these as I found I was far too close to the material to get a clear sense of whether the website was in any way useful. And so I asked Claude to generate a number of user profiles, and then tasked Claude Code to simulate conversations between these and an AI primed with the website’s prompt.6
They are a little “AI” in places I must admit. But they are also a great way to get a handle on how this idea of an AI-legible “living book” works. And they are a lot of fun to read!
Also, for the tech geeks, all the files are available to explore and dive into on Github.
What’s working well, and what’s not
As I’ve noted above and in the footnotes, it’s tempting to consider this exercise as simply a glorified version of giving an AI a copy of the book and asking about it (like you might do in NotebookLM for instance), or building an AI agent/bot around it.
But spoileralert.wtf is very different from either of these.
And this makes it intriguing, ground-breaking,7 and sometimes just a little frustrating.
Unlike using the book through RAG, or developing a bot like a Gem (Gemini) or GPT (ChatGPT), the llms.txt-based approach allows an AI to navigate through a vast corpus of material, and to draw on connections that would otherwise be hard to make.
It also allowed me to architect the experience at a level of nuance and sophistication that would have been out of my control with a bot/agent, or by simply letting people upload the book to an AI platform themselves.
And this is the beauty of using an llms.txt file as a guide for AIs to navigate websites that are designed specifically for them. In this case, it enables an LLM-based AI to leverage a map/hub/spoke/web model that is designed specifically for how it consumes and utilizes content.
But there are issues with this approach. Not least is the challenge that, at present, most AI models do not recognize llms.txt files by default. And this is why, in the current configuration, the copy and paste prompt includes specific instructions to read the file.
Then there are the AI platforms themselves. It turns out that some models are currently not advanced enough to engage fully with the material, or are simply not designed for this type of content.
For instance, there are still AI systems (including those that Microsoft uses) that use indexing by Bing to access web content (yes, you read that correctly). And so anything not indexed by Bing is essentially invisible to them.
And, it turns out, Bing refuses to index anything on the .wtf domain. Who would have guessed!
Gemini has a similar issue — not with the domain, but with page indexing on Google. As a result, until a site is fully indexed by Google, parts of it will remain invisible to Gemini. And to complicate things, Google does not seem to like indexing markdown files.
To get around this, every markdown file on the website has a parallel html file. There’s also a parallel llms-html.txt index that provides the key to using them — along with instructions in llms.txt to use this as a backup if issues are hit retrieving markdown content.
This, I was pleased to see, works surprisingly well, with Gemini (and even Claude at times) switching to html content if the markdown files are being troublesome.
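The fallback behavior described above can be sketched in a few lines. This is a minimal illustration of the logic, not the site’s actual code: the function name and the idea of passing in a fetch callable are my own assumptions, standing in for however a given AI platform actually retrieves web content.

```python
from typing import Callable, Optional, Tuple

def retrieve_content(fetch: Callable[[str], Optional[str]],
                     md_url: str, html_url: str) -> Tuple[str, str]:
    """Try the markdown file first; fall back to the parallel HTML file.

    `fetch` is a stand-in for whatever retrieval mechanism the AI platform
    uses; it returns the page content, or None if retrieval fails (e.g. the
    markdown file isn't indexed or can't be read).
    """
    content = fetch(md_url)
    if content is not None:
        return ("markdown", content)
    # Markdown retrieval failed -- switch to the parallel HTML version,
    # as directed by the llms-html.txt backup index.
    content = fetch(html_url)
    if content is not None:
        return ("html", content)
    raise LookupError("Neither markdown nor HTML version could be retrieved")
```

For example, a platform whose fetcher can only see HTML pages would transparently end up on the HTML copy:

```python
def bing_like_fetch(url):
    # Simulates an indexer that never sees the markdown files
    return "<html>chapter text</html>" if url.endswith(".html") else None

source, text = retrieve_content(bing_like_fetch, "chapter-01.md", "chapter-01.html")
# source is "html"
```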
With this, here’s where things stand as of writing with different models:
Claude Opus 4.6 (Extended thinking): works very well indeed.
Claude Sonnet 4.6: Also works well.
Gemini (Pro): Somewhat flakey, but possibly because it’s relying on files that have been indexed by Google (not all of them yet). And it’s not great with markdown files. This will hopefully improve over time.
ChatGPT: Good when it’s working well, but unreliable!
Grok (Expert): Pretty good.
DeepSeek (DeepThink): Enjoying the pants-on-fire hallucinations here.
Perplexity: Not really functional at all — at least with the free version.
The bottom line seems to be that most platforms will provide useful but superficial insights around the book using the website, but the simpler ones (and DeepSeek) are prone to missing stuff, not digging deep enough, veering off toward other sources, or simply making things up.
The more powerful the model, the larger the context window, and the more it utilizes reasoning/thinking modes, the better it is — with Claude far outstripping the rest.
And a final word
I have no idea whether anyone will find this exercise useful or interesting — and so I’d love feedback in the comments below.
I do know that there’s content in the 2018 book that is deeply relevant to this moment in time. And that this is buried in a book that very few people will read because a) it’s a book, b) it’s printed on paper (unless you have the Kindle version or audiobook of course), and c) it’s more than six minutes old (at least, it feels like this is the current attention-lifetime for new material).
And because of this, I feel quite strongly that new ways of making that content accessible and relevant should be explored.
The approach here of creating content intended for AI seems like a potentially interesting way forward, as it makes the book far more useful to someone using AI than it would otherwise be.
More personally though, this whole exercise has given me the opportunity to revisit the content and the thinking behind the book, as well as flexing my creative muscles while having some fun along the way.
And there’s been something quite generative about working with Claude Code on the additional material — including stuff that I’ve never written about before.
But, of course, to find that, you’ll have to try the spoileralert.wtf prompt out for yourself and see where it takes you 😁
There are, not surprisingly, many layers to why I chose this particular URL. To find out more though, you’ll have to point your AI to it and ask it why!
Markdown files are becoming the de facto standard for content written for AI consumption. And compared to regular web pages they offer a lot of advantages, including eliminating an awful lot of formatting and contextual content that is irrelevant to an AI, but eats up tokens anyway.
A bit of a spoiler alert here: If you are desperate to read the AI-intended content and are put off by the markdown formatting on the screen, you can access web-formatted versions from here: https://spoileralert.wtf/browse.html
I’m not sure there are any AI platforms that actively use the llms.txt protocol at the moment — which is a shame as the idea is that when an LLM visits a website the first thing it does is read llms.txt to allow it to navigate and access the content as an AI and not a human. But there’s nothing like a good bit of future-proofing!
Of course, there will be readers who are adamant that everything here could be replicated by uploading files into ChatGPT or NotebookLM, or creating an AI bot or agent. But trust me on this, the llms.txt plus integrated markdown file architecture is a fundamentally different approach to making content AI-navigable while not locking in to a specific platform.
Claude Code was instructed to create two agents with a firewall between them — one representing a user, and one representing Claude primed with the spoileralert prompt — and then simulate a back and forth between them.
There are people who are building AI-based extensions to books, and AI-legible versions of books in uploadable files (similar to our work with AI and the Art of Being Human). But I’ve struggled to find anyone currently using an llms.txt-markdown architecture in the same way that it’s being used here.