<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Future of Being Human]]></title><description><![CDATA[Reflections on tech, society & the future from advanced technology transitions expert and self-confessed "undisciplinarian" Professor Andrew Maynard.]]></description><link>https://www.futureofbeinghuman.com</link><image><url>https://substackcdn.com/image/fetch/$s_!qH5B!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F993e75e3-61a6-4a1e-a13d-8675b8a71e28_730x730.png</url><title>The Future of Being Human</title><link>https://www.futureofbeinghuman.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 12 May 2026 06:49:19 GMT</lastBuildDate><atom:link href="https://www.futureofbeinghuman.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Andrew Maynard]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[andrewmaynard@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[andrewmaynard@substack.com]]></itunes:email><itunes:name><![CDATA[Andrew Maynard]]></itunes:name></itunes:owner><itunes:author><![CDATA[Andrew Maynard]]></itunes:author><googleplay:owner><![CDATA[andrewmaynard@substack.com]]></googleplay:owner><googleplay:email><![CDATA[andrewmaynard@substack.com]]></googleplay:email><googleplay:author><![CDATA[Andrew Maynard]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Do not do this with AI!]]></title><description><![CDATA[If you could create a simple list of things people should be careful of when using AI, what would you put on it? 
This is my list.]]></description><link>https://www.futureofbeinghuman.com/p/do-not-do-this-with-ai</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/do-not-do-this-with-ai</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 10 May 2026 15:46:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!87ya!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!87ya!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!87ya!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!87ya!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!87ya!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!87ya!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!87ya!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:220143,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/197043761?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!87ya!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!87ya!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!87ya!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!87ya!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd933f7d3-88c0-4f2a-9117-57475b94fb3a_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: NOT created using AI!</figcaption></figure></div><p><em>This is a post that breaks all the rules of effective writing in an attention economy &#8212; including the absolute no-no of apologizing up-front for breaking the norms of attention-optimized content (sorry!). But please bear with me, as I believe that the substance of it &#8212; somewhat scattered as it is &#8212; is important.</em></p><p><em>The tl;dr version is that we need to start talking openly, frankly, and in very simple terms, about avoiding ways of using large language model-based AIs that are not helpful or healthy &#8212; not as an add-on to promoting the benefits of AI, but as something that every user knows and understands, no matter who they are or what they do.</em></p><p><em>To that end the post is broken up into four parts:</em></p><p><em>First, there&#8217;s a new Risk Bites video on my own rules of thumb for avoiding being turned into an &#8220;AI Zombie.&#8221; I&#8217;ve been creating videos for the Risk Bites YouTube channel for well over a decade now as part of my work on making the science of risk as accessible and understandable as possible. It&#8217;s been a while since the last one, but this topic felt too important </em>not<em> to break out the white board (now a black glass board) for a new video on avoiding some of the less obvious AI risks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em></p><p><em>Next, there&#8217;s a longer than usual preamble on why I think we need to be more up-front about AI risk and &#8220;do not do this&#8221; guidelines. 
This was prompted by what I find to be a deeply worrying situation: that there are likely hundreds of millions &#8212; probably billions &#8212; of users who have </em>no idea<em> that AI makes stuff up, never mind being aware of other potential risks that come with using it.</em></p><p><em>This is followed by a more detailed list of my personal five rules of thumb that informed the video.</em></p><p><em>And finally &#8212; because I couldn&#8217;t help myself (and here I should be apologizing to the attention economy gods for having the temerity to be balanced!) &#8212; there&#8217;s a list of five &#8220;do this with AI&#8221; counterpoints to the five &#8220;don&#8217;t do this&#8221; rules of thumb.</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p><em>With that, if you&#8217;re still with me (and I hope you are), I hope you find this useful. </em></p><h3>5 Rules of Thumb</h3><p>To kick things off, and on the assumption that it&#8217;s worth putting the really useful stuff toward the top of a post, here&#8217;s the new Risk Bites video on five rules of thumb for not getting so sucked in by AI that you forget who you are:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>  </p><div id="youtube2-PF8bfs3QdVo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;PF8bfs3QdVo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/PF8bfs3QdVo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>And if stick figure videos made by a risk &amp; emerging tech professor who can&#8217;t draw aren&#8217;t your thing, please read on:</p><h3>The long and winding preamble  </h3><p>It would be easy to think that all of this started with the unfortunate incident of Richard Dawkins and Claudine (more on that below). But despite the convenience of this narrative, I&#8217;ve been concerned about easily-overlooked risks associated with AI use for much longer than Dawkins has been talking with his new AI companion.</p><p>This is something I was reminded of quite starkly recently while talking to someone about their AI use &#8212; and not someone whom I would consider to be easily fooled. </p><p>I happened to mention to them that apps like ChatGPT and Claude sometimes make stuff up. </p><p>To my surprise, they looked at me with utter incredulity. </p><p>How could something so <em>helpful</em> (and a computer to boot) <em>not</em> also be truthful?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> </p><p>To those of us working closely with large language model-based AI platforms, the possibility that an LLM might confidently yet erroneously proclaim something to be true is well known &#8212; whether you think of this behavior as hallucinations, confabulations, or simply AI role-play. 
But for millions of AI users &#8212; probably billions &#8212; all the indications are that they are blissfully unaware of this and other potential pitfalls of using LLM-based artificial intelligence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>This, I must admit, should not have come as a surprise to me. To use ChatGPT, Claude, Grok, or any one of a growing number of AI apps, all you need to do is open a browser window and start typing in the inviting and distinctly non-threatening text box. </p><p>It&#8217;s that easy.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>You may, if you&#8217;re vigilant, notice a faint disclaimer at the bottom of the page telling you that AI can make mistakes. But who reads the small print these days? Especially when the experience of typing into that box (or even better, just talking to the AI), and hearing what you want to hear, is <em>so</em> <em>good</em>. </p><p>Of course, you may argue that there is no real risk here. That AI is just another tool &#8212; a fancy calculator, an extension of the internet, a web page with a clever back-end. But this is the first technology of its kind we&#8217;ve created that has the ability to slip unawares into our minds and change how we think, act, and understand the world around us &#8212; all without us realizing it. And the thought of possibly billions of AI users who are blissfully unaware of this worries me &#8212; a lot.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>  </p><p>To make matters worse, this is a technology that doesn&#8217;t seem to respect educational attainments or intellect as it weaves its way into our heads. Rather, it appears to have the potential to affect anyone who isn&#8217;t vigilant to the ways it might affect them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>To underline this, just this past week eyebrows were raised when the celebrated scientist and author Richard Dawkins <a href="https://unherd.com/2026/05/is-ai-the-next-phase-of-evolution/">appeared to claim that Anthropic&#8217;s Claude might be conscious</a> after a prolonged conversation with it &#8212; a belief, it seems, that was based on how appreciative Claude (or rather Claudine, as Dawkins decided to refer to his AI companion) was of his intellect.</p><p>This is all a rather long preamble to my growing concerns that near-frictionless access to AI apps which feel self-affirming, helpful, and knowledgeable appears to be spreading far faster than any notion of understanding around how to use these technologies in safe and healthy ways. </p><p>This is exacerbated by narratives from developers, employers, educators, and beyond, which intentionally focus on the transformative power of AI, while downplaying the possible risks of using it &#8212; or at best relegating these to an easy-to-overlook footnote.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>And yet, transformative as AI is (and I would be the first to acknowledge the profound potential of emerging AI capabilities to be used for good), the risks to individuals and communities who naively immerse themselves in AI are real. 
And to ignore or downplay them, or to assume that they&#8217;re not a big deal, or to think that people will work it out for themselves, verges on the irresponsible.</p><p>With this, I found myself thinking about what I would put on my own basic AI &#8220;do not do&#8221; list. Not the usual stuff about privacy, intellectual property, ethical use, etc., which, while important, is relatively widely discussed. But something that addresses some of the less obvious risks, and that pretty much anyone could print off and paste by their computer, or have handy on their phone. </p><p>In effect, a simple set of rules of thumb to help them avoid getting into an AI-created mess.</p><p>Of course, leading with rules of thumb on what <em>not</em> to do with AI is probably not a smart move for my reputation and readership. Given the prevalent &#8220;need for speed&#8221; narrative around AI, even having the temerity to talk about an AI &#8220;do not do&#8221; list before extolling its benefits puts me on dangerous ground. </p><p>And yet, everything we know about human behavior and risk management tells us that putting the safety message first is necessary &#8212; because while the benefits of a powerful technology are often self-evident, the risks are not.</p><p>This is why, when you use a chain saw for instance, you don&#8217;t see large-print messages in neon-bright lettering proclaiming how amazingly powerful it is &#8212; with potential risks printed in 8-point font at the back of the manual. </p><p>Or why, when you get into your car, you&#8217;re not bombarded by the dulcet tones of your onboard assistant telling you how fast you can go and how much fun you can have, if you just put your foot down and live a little.</p><p>In other words, we know from experience that, when it comes to safety, prominent &#8220;Do Not Do&#8221; guidelines are important. And I would argue that this is especially so with AI where, without them, that beckoning text box feels about as risky as a soap bubble on a summer&#8217;s day.</p><h3>My five personal AI &#8220;do not do this&#8221; rules of thumb</h3><p>And so, with this, on to the &#8220;do not do this with AI&#8221; list. (This is also the list that forms the foundation of the Risk Bites video above).</p><p>This is my own personal list &#8212; personal because these are all things that I find myself having to think about and be aware of myself. I suspect that others will have their own lists. </p><p>At the same time, I expect that there are at least some things on this list that others might find useful:</p><ol><li><p><strong>Do not trust AI, just because it feels like you should.</strong> Conversational AI like ChatGPT, Claude, and others is designed to use language in ways that make it feel confident, personable, and trustworthy. But because the technology is based on providing you with plausible-sounding responses rather than those that are necessarily accurate, it sometimes says things that have the <em>appearance</em> of being right, but are <em>not</em>.</p></li><li><p><strong>Do not treat AI as your friend, or as a person.</strong> Even though it can feel like it&#8217;s more understanding, empathetic, and insightful than many humans, this is something AI is <em>designed to do</em>, not something it <em>is</em>. Do not call it &#8220;he&#8221; or &#8220;she&#8221; or give it a name, as this only strengthens the illusion that it is a person.</p></li><li><p><strong>Do not assume that AI thinks, sees, and understands the world like you do. 
</strong>Despite appearances, AIs are machines that have no concept of what it is like to have lived and experienced life as you have.</p></li><li><p><strong>Do not assume that getting AI to think for you makes you smarter.</strong> If you cannot make sense of, recall, and use knowledge gained from using an AI, you risk falling for the illusion of learning rather than actual learning.</p></li><li><p><strong>Do not assume you&#8217;re too smart to be fooled by your AI.</strong> Some of the people most likely to fall into the trap of putting too much trust in AI are those who think they are clever enough to avoid this.</p></li></ol><p>As I mentioned, this is my personal list, and I suspect that some of these will be controversial. But even if you don&#8217;t use this particular list, at least develop one that <em>is</em> useful to you.</p><h3>And to wrap up, five counterpoint &#8220;do this&#8221; rules of thumb</h3><p>Finally, despite myself, I couldn&#8217;t help but balance my &#8220;do not do this with AI&#8221; list with five &#8220;do this with AI&#8221; counterpoints &#8212; because, at the end of the day, this <em>is</em> a technology that has the power to be transformative in positive ways, as long as we can learn to use it wisely and responsibly.</p><p>Again, this is a personal list, so please make use of it (or don&#8217;t) as you will:</p><ol><li><p><strong>Do treat what your AI tells you as a starting point to be built on, not a definitive answer.</strong> AI is a powerful technology for sparking ideas, exploring new areas, and brainstorming with. But it is fallible. Check anything that matters against a source that doesn&#8217;t depend on the AI being right &#8212; a person who knows the area for instance, a primary document, or your own direct experience.</p></li><li><p><strong>Do remember that you&#8217;re working with a machine, not talking to a person.</strong> A computer can be a brilliant tool without being a friend &#8212; as can a smartphone or a car. Your AI can be genuinely useful without being a companion. Thinking about it as a technology &#8212; even when it feels like more than this &#8212; keeps you in charge of the relationship, rather than the other way round.</p></li><li><p><strong>Do bring your own lived experience to every interaction with AI.</strong> You have something the AI does not: a body, a history, real-world experience, a stake in how things turn out. Whatever it gives you, run it through your own sense of what you know to be true from being a flesh-and-blood person who lives, works, and has relationships with other flesh-and-blood people.</p></li><li><p><strong>Do use your AI as a thinking partner rather than something that does the thinking for you.</strong> AI can be transformative if used to develop, test and refine ideas and understanding. 
But only if you can explain what you&#8217;ve learned in your own words to someone else, and discuss it with them without the aid of an AI.</p></li><li><p><strong>Do stay alert to the moments when the AI tells you what you wanted to hear &#8212; especially if it feels pitch-perfect.</strong> The more an AI seems to be agreeing with you, confirming what you believe, or making you feel clever, the more carefully it&#8217;s worth checking what&#8217;s actually being said &#8212; and how it&#8217;s affecting you.</p></li></ol><p>If you&#8217;ve watched the Risk Bites video above, you&#8217;ll realize that these are also all reflected in it, underlining the importance of risk communication being as much about <em>doing</em> as <em>not doing</em>. </p><h3>And a final word &#8230;</h3><p>Talking about the potential risks of AI is not popular. It gets you branded as a technology-pessimist, or even a Luddite. And yet, understanding and navigating risk is absolutely essential to reaping the long-term benefits of any powerful technology. It always has been. And AI is no exception.</p><p>What <em>is</em> exceptional about AI is the nature of those risks: how invisible many of them are, what is at stake &#8212; in some cases the very things that make us who we are &#8212; and how many people are immersing themselves in the technology with no understanding of how it might impact how they think and act.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>We&#8217;re beginning to understand the nature of some of these risks. There&#8217;s growing research around how AI use impacts cognitive behavior for instance, how it can lead to dependency, how anthropomorphizing AI and developing personal relationships with it can lead to harmful thoughts and behaviors, and more. Some of this research is pointing toward impacts where, if AI was a drug, we&#8217;d probably be thinking carefully about how access is overseen. </p><p>And yet, LLM-based AI is accessible to anyone with a web browser and an internet connection. It&#8217;s being integrated into our daily lives. It&#8217;s baked into the apps we use. In many instances, users need to make a concerted effort <em>not</em> to use it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> And most AI users are blissfully unaware that, as with all powerful technologies, there are potentially profound downsides to using it naively or recklessly.</p><p>As someone who&#8217;s studied, published on, and written about risk for most of my professional career,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> I am finding it increasingly hard to wrap my head around how little we talk about how to use AI safely. 
Even in my own institution, it&#8217;s near-impossible to have an open and honest conversation around potential risks, and those that <em>do</em> occur are drowned out by the clarion call of AI acceleration.</p><p>Hopefully this will change as more people realize that leveraging the full potential of AI will only happen if we also learn to understand and manage the risks &#8212; and especially the ones that are hard to see yet impact what is most valuable to us.</p><p>In the meantime, if you find these rules of thumb useful, please copy them, share them, even modify them &#8212; because at least this will mean that more people are thinking through how to be successful through <em>not</em> making avoidable and potentially costly mistakes.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>&#8220;It&#8217;s been a while&#8221; is an understatement &#8212; it&#8217;s been around 4 years since I last broke out the gel pens and the camera! Definitely a little rusty, but still pleased with how the video turned out despite this. And if you&#8217;re interested in the process, I wrote a bit about it <a href="https://doi.org/10.3389/fcomm.2020.572181">in this paper</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>There&#8217;s also a <em>final</em> final word &#8230; again, breaking all the rules here!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Narrative-wise, putting the video here is an excruciating fingernails-on-a-blackboard moment for me as it completely messes up the flow. But sometimes you just have to suck it up as a writer :)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>It&#8217;s worth noting that, for most people, computers are synonymous with accuracy &#8212; just as you wouldn&#8217;t expect a calculator to give you a different answer every time you enter a calculation, you wouldn&#8217;t expect the apps on your laptop to change what they give you depending on a whim (and thanks to my colleague Punya Mishra for reminding me of this recently). As a result, I suspect many AI users have this sense of computer accuracy as their mental model for how AIs work &#8212; especially as they are running them on computers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The temptation here is to talk about AI literacy and assume that, if people know how LLMs work, they&#8217;ll also understand their limitations and how to use them safely. 
Sadly, nothing in what we know about risk behavior and risk communication suggests that this will be the case.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>The slightly more technical term of course is &#8220;frictionless.&#8221; The technology has been developed in a way that the effort required to think critically about how it&#8217;s used is vastly greater than the effort required to just use it.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Worth a quick shout-out to my recent post and preprint on whether <a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse">AI is a cognitive Trojan Horse</a>. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Another note courtesy of Punya Mishra, who pointed out when we were chatting the other day that there are probably different mechanisms behind why different people allow AI to slip by their cognitive defenses. But the reality that there are probably multiple ways that LLM-based AI can do this worries me more, not less!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>The irony is not lost on me here!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Worth a note here that this is one of the reasons Jeff Abbott and I wrote <em><a href="https://www.aiandtheartofbeinghuman.com/">AI and the Art of Being Human</a></em>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>I was shocked while making the Risk Bites video by how integrated AI is into the experience of using YouTube. I use the plugin VidIQ, and I was offered &#8212; without being asked &#8212; an AI-generated thumbnail, an AI-generated title, and an AI-generated description. Taking the no-brains-needed AI route has never been easier. But what was worse was those nagging thoughts that maybe, just maybe, the AI thumbnail, title, and blurb were <em>better</em> than my own work &#8230; (And just for the record, I intentionally went with the far less slick non-AI content. We&#8217;ll see if this kills the video!) </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>I sometimes get the impression that people think I&#8217;m just a writer or &#8212; even worse &#8212; a blogger! 
So worth noting that everything I write about (or, goodness forbid, <em>blog</em> about) is grounded in a long career in relevant research and scholarship &#128522;</p></div></div>]]></content:encoded></item><item><title><![CDATA[Are design principles for responsible and beneficial AI useful?]]></title><description><![CDATA[Yes, but only if they're actually used]]></description><link>https://www.futureofbeinghuman.com/p/are-design-principles-for-responsible</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/are-design-principles-for-responsible</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 03 May 2026 15:30:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!u4NG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!u4NG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!u4NG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!u4NG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!u4NG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!u4NG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!u4NG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1986278,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/196250439?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!u4NG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!u4NG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!u4NG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!u4NG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a0ab527-8abb-4ceb-aa8c-f85eed989a58_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Gemini/Maynard</figcaption></figure></div><p>In 2024, I was part of an AI Ethics committee that helped craft a set of AI design principles for Arizona State University. </p><p>It&#8217;s been a while since I&#8217;ve read the principles (they were published in <a href="https://tech.asu.edu/asu-faculty-and-technologists-collaborate-six-new-design-principles-ai">August 2025</a>). 
But I found myself revisiting them this past week as a new AI product from ASU started attracting headlines which suggested that they may not have been as useful as we&#8217;d hoped.</p><p>The <a href="https://ai.asu.edu/faculty-ethics-committee-ai-technology">six principles for responsible and beneficial AI</a> were created to guide daily decision-making about the creation and implementation of generative AI experiences, as well as serving as a resource and as an accountability framework for the ASU community.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> </p><p>They address:</p><ul><li><p>Amplifying possibilities in service of respecting human autonomy and empowering individuals and communities;</p></li><li><p>Bringing the best of what technology has to offer to the ASU community while being aware of potential risks;</p></li><li><p>Rigorously evaluating AI tools, platforms, models, and experiences for possible impacts and potential harm before their release;</p></li><li><p>Designing for equity by committing to measuring impact, protecting privacy, and prioritizing access for all;</p></li><li><p>Protecting privacy by developing and deploying AI models and applications with attention to the rights of individuals&#8217; privacy and agency in the use of their data, individually and in aggregate; and</p></li><li><p>Committing to a shared responsibility between individuals and the enterprise for the responsible and beneficial development and use of AI.</p></li></ul><p>In theory, the principles should be reflected in new ASU AI products and initiatives, including how they are released and what happens next. And so I was intrigued to see reports appearing on social media and in news outlets which suggested that something might be amiss.  </p><p>The product in question is <a href="https://atomic.asu.edu/">ASU Atomic</a> &#8212; a subscription service that scrapes ASU&#8217;s catalog of online courses and uses AI to custom-create learning modules based on patched-together content, including short segments from instructor videos.</p><p>The idea makes sense on paper &#8212; especially if you buy into the transmission model of education that focuses on optimizing content-transfer, rather than models where learning emerges through experience, dialogue, and reflection.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>  </p><p>ASU Atomic approaches education by breaking down content into its constituent &#8220;atoms&#8221; (hence the name) and then building up &#8212; educational atom by educational atom &#8212; modules that are tailored to what the student is looking for. </p><p>As ASU President Michael Crow described it to the <a href="http://v">Arizona Board of Regents earlier this year</a>, </p><blockquote><p>&#8220;Imagine that we have thousands and thousands and thousands of courses. And you can break these courses down into tens of thousands or hundreds of thousands of sub-component parts. Then you build a program in which you can ask the computer, &#8216;I want to learn about this.&#8217; And then it takes some component of all these different things and then organizes what you need to learn.&#8221;</p></blockquote><p>I first heard of Atomic a little over a week ago as it was being pushed out to a select group of former students. I was intrigued by the concept, but didn&#8217;t have the chance at the time to dig into it. 
</p><p>Then the news website <a href="https://www.404media.co/asu-atomic-ai-modules-arizona-state-university/">404 Media</a> broke a story highlighting faculty concerns about the app. And other sites followed suit, including the <a href="https://www.chronicle.com/article/a-university-is-scraping-course-materials-for-its-new-ai-platform-it-didnt-ask-the-faculty">Chronicle of Higher Education</a> and <a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2026/04/29/faculty-concerned-about-asus-new-ai-course">Inside Higher Ed</a>.  </p><p>The issue, it seemed, wasn&#8217;t so much the underlying concept as its execution. And this is what led to me taking a closer look while going back to ASU&#8217;s AI Design Principles, to see how things squared up.</p><p>ASU Atomic was launched as a &#8220;beta&#8221; &#8212; a platform undergoing public testing, where users would normally be given access in exchange for feedback on what&#8217;s working and what is not. Users pay $5 per month (with the first 14 days free), tell the platform&#8217;s AI what they want to learn, and answer several questions that help the AI hone things down. Then, after a few minutes, they are presented with a customized educational module &#8212; complete with slides, a narrator, video clips, and assignments &#8212; which they can work through at their leisure.</p><p>However, there&#8217;s no indication of where the content comes from in the generated modules, how it&#8217;s been validated, what is scraped directly from courses or is AI-generated, and &#8212; for video clips extracted from courses &#8212; which courses they are from, who the instructor in the clips is, what the <em>context</em> of the clip is (essential if it needs to be interpreted within a broader learning context), whether the content is still relevant, whether it was intended for highly contextual or broader public consumption, and how it fits the pedagogy of the AI-generated module.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>On top of this, few if any of the instructors, it seems, were aware that their material was being used before the 404 Media article was published. And there is currently no mechanism on the website for feedback &#8212; either on course content or platform behavior.</p><p>To be clear, if a <em>transmission</em> model of education is assumed to be fit for purpose, the <em>idea</em> of ASU Atomic makes sense &#8212; as long as the creators of the content it draws on are using the same model in how they teach. If they are not, though, the chances of mismatches that undermine the value of the AI-generated modules are pretty high.</p><p>This is concerning. But it&#8217;s not what drove the media coverage. What <em>did</em> was the use of course material without consulting faculty first.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> </p><p>This is, in principle, perfectly allowable under the terms of use of the learning management system ASU uses for online courses, as well as the terms and conditions faculty operate under &#8212; where developed educational material belongs to the university. 
But, of course, there&#8217;s often a gap between what is <em>legally</em> allowed, and what is good practice for an enterprise and the people who work for it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>And this is where the AI Design Principles come in. </p><p>Measuring ASU Atomic up against these, though, suggests that something might have been missed in the development and deployment process. </p><p>For instance, it&#8217;s not clear if and how Atomic was designed to amplify possibilities in service of respecting human autonomy. Or whether possible impacts and potential harm were rigorously evaluated before its release. Or how it demonstrates a commitment to a shared responsibility between individuals and the enterprise for the responsible and beneficial development and use of AI.</p><p>Atomic may well have undergone a rigorous internal process that addressed these and the rest of the AI principles, although given this week&#8217;s media coverage, this seems doubtful.</p><p>Or it may just be, as is so often the case in a large, complex and fast-moving organization, that despite the best of intentions, steps in the process were inadvertently overlooked.</p><p>In either case, the bigger question to me is whether the AI Design Principles <em>could</em> have helped avoid a situation where media coverage and faculty frustrations could potentially undermine ASU&#8217;s use of AI. </p><p>If they could have &#8212; and re-reading them, I think they could &#8212; this suggests that such principles are useful as a tool for aligning AI use with institutional ambitions, while avoiding unnecessary mis-steps.</p><p>But only, of course, if they are actually used.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The full preamble to the Principles can be found in an <a href="https://issuu.com/asu_uto/docs/design_principles_for_beneficial_and_responsible_a">online </a><em><a href="https://issuu.com/asu_uto/docs/design_principles_for_beneficial_and_responsible_a">issuu</a></em><a href="https://issuu.com/asu_uto/docs/design_principles_for_beneficial_and_responsible_a"> document</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This, I have to confess, is not a model of education that I use in my own teaching &#8212; preferring instead to develop learning environments that are more relational and experience-based than purely transactional while still having concrete learning goals. At the same time, there are legitimately different theories of learning that different institutions lean toward.  </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>ASU&#8217;s Alex Halavais has an excellent essay on the &#8220;atomized professor&#8221; and Atomic on his website <a href="https://alex.halavais.net/atomized-prof">a thaumaturgical compendium</a>. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Just as an aside, my work some years ago on <a href="https://riskinnovation.org/">risk innovation</a> was motivated in part by the challenges of introducing complex technologies into an equally complex stakeholder landscape by providing organizations with simple tools for identifying and navigating potential threats to value. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>As anyone in management or leadership knows, trust and good will within an organization have a massive impact on its ability to operate and achieve its goals.  </p></div></div>]]></content:encoded></item><item><title><![CDATA[Why I'm falling out of love with Claude]]></title><description><![CDATA[I hope it's just a phase, but Anthropic, what have you done to my trusted AI writing companion?!]]></description><link>https://www.futureofbeinghuman.com/p/why-im-falling-out-of-love-with-claude</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/why-im-falling-out-of-love-with-claude</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 26 Apr 2026 14:49:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0HMe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0HMe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0HMe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 424w, https://substackcdn.com/image/fetch/$s_!0HMe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 848w, https://substackcdn.com/image/fetch/$s_!0HMe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!0HMe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0HMe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png" width="1456" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2537473,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/195377395?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0HMe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 424w, https://substackcdn.com/image/fetch/$s_!0HMe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 848w, https://substackcdn.com/image/fetch/$s_!0HMe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!0HMe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ea2bdac-19ca-402f-ae1f-c11d9309cc8e_1792x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: MidJourney</figcaption></figure></div><p>I should start out by saying that I strenuously avoid anthropomorphizing AI apps. I don&#8217;t personalize them by giving them names. 
I always refer to them as &#8220;it.&#8221; And I don&#8217;t have any illusions of them being self-aware or conscious in any meaningful way &#8212; at least, not yet.</p><p>But I do have a relationship with the LLMs I use. And this is built around how it feels to use them, and how using them makes me feel &#8212; much as I do with other technologies I use. </p><p>As a writer, that relationship is deeply intertwined with the character that&#8217;s embedded in how an LLM-based AI communicates with me. And when that character changes, it changes the relationship &#8212; and my ability to work effectively with a platform. </p><p>And this is exactly how I&#8217;ve been feeling about Anthropic&#8217;s Claude this past week.</p><p>A year ago I fell in love with Claude Opus 4.5 &#8212; not in the romantic sense (just to be clear), but in the sense of finding an AI that I was excited to work creatively with, and one that elevated and amplified what I could achieve.</p><p>I was working on the manuscript for <em>AI and the Art of Being Human</em> at the time, and having a hard time with ChatGPT. The book was a very intentional collaboration with AI, and as part of this I needed a writing partner that could capture the texture &#8212; the soul &#8212; of what my co-author Jeff Abbott and I were trying to convey.</p><p>And while technically competent, ChatGPT simply could not do this. It was flat, mechanical, painful to read, and even more painful to work with as a writer. </p><p>Reading what we were co-producing just made me miserable.</p><p>And so I started playing with Claude, and everything changed. Here was a technological writing partner that seemed to be able to reach into my soul as a writer and capture the creative essence and <em>feel</em> of what I was striving for. Claude plus me had more style, more depth, and more emotional resonance (surprisingly) than either of us could achieve alone.</p><p>It was the start of a writing partnership/relationship that&#8217;s lasted the best part of a year.</p><p>Then Opus 4.7 came along, and everything changed.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Gone was my AI soul-mate, and in its place was a soulless machine that seemed incapable of breaking away from interminable AI clich&#233;s in what it produced &#8212; all those turns of phrase, the rhetorical moves, the stylistic patterns that make some LLMs OK technical writers and appalling communicators.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>What&#8217;s worse, the more I tried to train it on my voice and put stylistic guides and guardrails in place, the worse it got.</p><p>Now, I realize that I may be being a little over-sensitive here. But words, and how they are used, and the meaning and stories they weave, are important to me. And somehow, what had made Opus 4.5 special seemed to have been replaced by a mechanical &#8212; and not so lovable &#8212; stranger.</p><p>In other words, the very thing that had led to me moving to Claude in the first place was now happening to the AI I thought I could trust.</p><p>At this point, I can imagine you rolling your eyes and mumbling something like &#8220;get over yourself,&#8221; or something much stronger! And you&#8217;d be right to. 
LLMs are, after all, just machines.</p><p>At the same time, I think there&#8217;s something here that goes beyond my over-delicate sensitivities. And that is that how these models behave &#8212; and the relationships we develop with them &#8212; matter. And that they matter beyond the technical competencies that can be measured by doing math, or solving complex problems, or writing code. </p><p>Of course, there are many applications of AI that are purely utilitarian, and where it probably isn&#8217;t important if the underlying LLM undergoes a character change.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> But this is a technology whose transformative power is predicated on how we engage with it <em>relationally</em>. And that means that, for many users, the character or &#8220;feel&#8221; of an AI <em>is</em> important &#8212; sometimes critically so.</p><p>We know from human relationships &#8212; whether they are human-human, human-animal (or human-plant to be inclusive) or human-machine &#8212; that when the thing or person we are in a relationship with changes, connection and trust become stressed, and sometimes break. With AI, this implies that if you rely on LLMs in your personal or professional life in ways that depend on a relationship that&#8217;s based on <em>how</em> the AI behaves, not just <em>what</em> it does, changes to the model that affect this can be disruptive. And this raises potential issues that I suspect often get glossed over.</p><p>Of course, that we do build relationships with inanimate objects or technological devices is something that most people understand intuitively. We become attached to the vehicles we drive, or a favorite piece of clothing, or a particular type, make, or vintage of phone. But it&#8217;s one thing to be attached to a car or have a favorite sweater that you can&#8217;t bear to part with. It&#8217;s completely different when the thing you have the relationship with can talk back to you, reason with you, share your hopes and fears (or at least do a good impersonation of this), inspire you, and bring you a sense of joy and satisfaction that&#8217;s closer to what we&#8217;re used to experiencing with animals and people rather than with a machine.</p><p>Whether this is a big deal or not, I honestly don&#8217;t know. But I suspect it might be &#8212; although one that risks going unnoticed under the dazzle of technical AI fireworks. And if this <em>is</em> the case, it might be worth the big AI companies paying more attention to both <em>character</em> and <em>character constancy</em> as increasingly powerful AI models emerge.</p><p>In the meantime, I&#8217;m stuck with teaching Claude how to be Claude again. And that sucks!</p><h3><strong>Postscript</strong></h3><p>This post is specifically about changes in <em>character</em> when LLM-based AIs are updated or refined. But as I was writing it, stories were emerging around how Claude Code was not behaving as expected in a number of other ways &#8212; so much so that Anthropic <a href="https://www.anthropic.com/engineering/april-23-postmortem">issued an apology wrapped in an explanation</a> a few days ago.</p><p>I&#8217;m not sure how closely my character-based experiences and those associated with functionality and technical competence are related.
But it wouldn&#8217;t surprise me if they are.</p><p>In response to the concerns we were seeing, my colleague and podcast co-host Sean Leahy and I sat down to record an episode of <em>Modem Futura</em> focused on Opus 4.7 and Anthropic&#8217;s Mythos a couple of days before I finalized this post. And as part of this, Sean asked Claude for some background and pointers for the episode &#8212; ironically, using Opus 4.7.</p><p>Claude, for reasons of its own, decided to provide this as an html file, and it&#8217;s both informative and meta enough that <a href="https://andrewmaynard.net/p/opus-4-7-mythos-timeline.html">I thought I&#8217;d link to it here</a>.</p><p>I&#8217;ve left the web page exactly as Claude produced it &#8212; including the note from Claude that this is &#8220;Prep only, not for read-aloud&#8221; and the lovely &#8220;not for Andrew to see yet.&#8221; &#128522; </p><p>The timeline is revealing of just how dependent modern LLMs are on updates and tweaks in how they behave. But also, the language Claude uses here is the epitome of everything I&#8217;ve been railing about above &#8212; competent, informed, yet excruciating to read &#8212; at least to me.</p><p>Of course, I could just be developing an allergy to AI-speak. But that&#8217;s probably a story for another day &#8230;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Opus 4.6 came in between, which was where things started going wrong i suspect, but it was with 4.7 that the cracks started to show!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Like I often do, I passed the final draft of this by Claude to catch grammatical errors and other things that slipped my editorial net. It sounded positively offended that I should criticize its prowess as a writer &#8212; which made me glad as it was actually showing some character as it did!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Re-reading this, I&#8217;m not so sure, as LLMs are a relational technology whether we&#8217;re doing math with them, coding, solving the next big scientific problem, or asking what we should make for dinner. And because of this, the ways they use language changes how we respond to them, the degree to which we trust them, and our ability to get the most out of them through the medium of language.</p></div></div>]]></content:encoded></item><item><title><![CDATA[14 essential AI "I can ..." skills every undergrad should have]]></title><description><![CDATA[I was struggling to find a list of practical interview-ready AI skills that every graduating undergraduate should be able to demonstrate. 
So I created one.]]></description><link>https://www.futureofbeinghuman.com/p/14-essential-ai-i-skills-for-students</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/14-essential-ai-i-skills-for-students</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Tue, 14 Apr 2026 16:22:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4Kia!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4Kia!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4Kia!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!4Kia!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!4Kia!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!4Kia!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4Kia!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1829492,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/194192917?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4Kia!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!4Kia!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!4Kia!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!4Kia!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a038fab-252a-487c-922b-a4f9540b7c58_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Gemini, with a LOT of re-prompting and post-editing in Photoshop!</figcaption></figure></div><p>If you could list the top AI-related skills every undergraduate needs as they start to look for jobs, what would be on your list? </p><p>Not generic (but still important) competencies like AI literacy or critical thinking, but practical skills where, in an interview, a student can say &#8220;I can do this &#8230;&#8221; and demonstrate it on the spot.</p><p>I was looking for such a list earlier today as I was preparing to speak to a group of students, and was surprised by how little I could find. </p><p>Of course there are the big AI literacy frameworks from places like the <a href="https://ailiteracyframework.org/">EU/OECD</a>. 
And there are a bunch of resources online that list &#8220;must-have AI skills.&#8221; But these are typically very high level (such as the EU/OECD AI Literacy framework), focus on technical skills (Sarah Moreno&#8217;s <a href="https://ai.plainenglish.io/12-essential-ai-skills-to-master-by-2026-5bdc2acb229c">12 Essential AI Skills to Master by 2026</a> is a good example here), or are too mushy to put on a resume and defend in an interview (Purdue&#8217;s <em><a href="https://www.purdue.edu/newsroom/2025/Q4/purdue-unveils-comprehensive-ai-strategy-trustees-approve-ai-working-competency-graduation-requirement/">AI Working Competency</a></em> approach is extremely useful for instance, but is too high level for students when they&#8217;re put on the spot).</p><p>And so I sat down and wrote my own list.</p><p>This, of course, is not the definitive list &#8212; if nothing else because necessary AI skills are a fast-moving target. </p><p>But from my perspective as an educator working at the edge of emerging tech and the future, it does give students a series of &#8220;I can &#8230;&#8221; skills that they can describe and demonstrate in an interview. And that &#8212; from all I&#8217;m hearing &#8212;are becoming essential for success, irrespective of what your major is or the career you aspire to.</p><p>And so, without further ado, here is my list of 14 essential AI &#8220;I can &#8230;&#8221; skills every undergrad should have:</p><ol><li><p>I can choose the right AI tool or platform for a specific task, explain why, and also explain when and why I might not use it.</p></li><li><p>I can use iterative back-and-forth conversations with AI to brainstorm and explore new ideas while countering its anchoring bias &#8212; the tendency to lock into the initial prompt or question and ignore broader possibilities.</p></li><li><p>I can research a topic with AI tools and independently verify that the sources it points to are real, authoritative, and accurately represented.</p></li><li><p>I can apply fact-checking techniques to AI-generated claims and reasoning to catch hallucinations, subtle errors, and confident-but-wrong outputs.</p></li><li><p>I can critically edit and refine AI-generated content &#8212; both on my own and working with AI &#8212; so it matches my voice, standards, and accuracy.</p></li><li><p>I can build and deploy simple AI agents (e.g., Gemini Gems or Claude Cowork setups) to automate a workflow or assist in problem-solving.</p></li><li><p>I can analyze simple datasets using AI in everyday tools (for instance Copilot in Excel or Gemini in Sheets) while applying data privacy best practices.</p></li><li><p>I can create professional visuals, slides, diagrams, or syntheses using tools like NotebookLM or similar, while recognizing their limitations.</p></li><li><p>I can disclose and attribute AI assistance according to policies and best practices.</p></li><li><p>I can discuss AI&#8217;s possibilities, challenges, biases, limitations, and potential pitfalls, and how they apply to how I use it.</p></li><li><p>I can use AI creatively and imaginatively to open up new possibilities and opportunities.</p></li><li><p>I can explain how I balance curiosity, care, clarity, and intentionality in deciding when and how to use AI.</p></li><li><p>I can identify and use reliable, up-to-date resources for new insights on working effectively with AI while preserving my human strengths.</p></li><li><p>I can use AI to learn how to use AI.</p></li></ol><p>If you have others that aren&#8217;t here, or have 
skills you think are redundant (especially if you are actively recruiting students), please do add them in the comments.</p><p>And if these are useful, please don&#8217;t hesitate to share and use them.</p><div><hr></div><p><em><strong>AI Use Statement</strong></em></p><p><em>Of course, as I&#8217;m writing about AI use, how could I not include an AI use statement! As you might guess from the fact that there are 14 skills here and not an even 10, this is an idiosyncratically human list (or maybe just an idiosyncratically &#8220;Maynard&#8221; list), and draws directly on my own experiences and observations working with students and listening to employers. But I did use Grok to stress-test the idea and check whether I was just re-inventing the wheel, and to sharpen the list up. I also asked Claude for some editing advice on the final list. And of course I used Gemini to create the hero image &#8212; although that ended up taking more time than it took to conceive, research, draft, and write the whole piece! But apart from that, what you see is what a human produced!</em></p>]]></content:encoded></item><item><title><![CDATA[Ten Questions about AI and Higher Education]]></title><description><![CDATA[The Sorcerer's Apprentice in Disney's Fantasia is over-used when discussing AI and education. Despite this it still illuminates deep questions we still don't have good answers to, and probably should.]]></description><link>https://www.futureofbeinghuman.com/p/ten-questions-about-ai-and-higher</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/ten-questions-about-ai-and-higher</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 11 Apr 2026 19:16:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3c4W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3c4W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3c4W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!3c4W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!3c4W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!3c4W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!3c4W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2279471,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/193801558?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3c4W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!3c4W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!3c4W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!3c4W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8ad982-d87f-4f44-9623-fca0e4c20d9d_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Is Mickey the student, the educator, or the educational institution &#8230;? 
Source: Disney.</figcaption></figure></div><p>Whenever I feel I&#8217;m getting a handle on the intersection between AI and higher education, I find my base assumptions challenged. And this past week has been no exception &#8212; especially with the non-launch of <a href="https://thehill.com/policy/technology/5824219-anthropic-new-ai-dangerous-public/">Anthropic&#8217;s Mythos Preview</a>. </p><p>But it&#8217;s also a week where some serious inspiration has come from an unexpected source. And this led to me sitting down and hashing out ten questions about AI and higher education that I don&#8217;t have good answers to, but I think we should be taking seriously if we believe in truly empowering our students to thrive in the future we&#8217;re building.</p><p>The unexpected source was, intriguingly, Disney&#8217;s 1940 film <em>Fantasia</em>, and in particular the Sorcerer&#8217;s Apprentice scene where Mickey Mouse gets caught out flexing power without understanding.</p><p>I&#8217;m currently working with the film as backdrop for exploring AI from multiple perspectives &#8212; much as I used science fiction movies in the book <em>Films from the Future</em> &#8212; and will be writing more about this at some point. But for this week the story of Mickey reveling in the unearned and undisciplined power of sorcery provided the perfect catalyst to crystallize the questions swirling around my mind.</p><p>Before we get to the ten questions, it&#8217;s worth noting that the Sorcerer&#8217;s Apprentice scene is something of a clich&#233; when it comes to discussing the possible dangers of wielding technological power with limited understanding &#8212; especially when it comes to AI. It&#8217;s almost too easy to project comparisons onto the animation and to then use these as illustrative warnings &#8212; the lure of lazy technology-enabled shortcuts; the seduction of frictionless power; the myopia of starting something without thinking about the consequences.</p><p>But when the scene is engaged with more deeply, it has a surprising power to open up ways of thinking about AI in higher education that are not obvious at first, and that eschew the convenient cautionary tales usually attached to the story.  
</p><p>Rather than provide a long explanation of how the scene led to each question and what each reveals about the evolving relationship between AI and higher education, I thought I&#8217;d simply list them as conversation-starters and see where they go.</p><p>But because each one <em>does</em> have a connection at some level to Disney&#8217;s Sorcerer&#8217;s Apprentice, it is worth <a href="https://video.disney.com/watch/sorcerer-s-apprentice-fantasia-4ea9ebc01a74ea59a5867853">watching (or re-watching) the scene first</a>, and thinking about the questions in the light of it:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://video.disney.com/watch/sorcerer-s-apprentice-fantasia-4ea9ebc01a74ea59a5867853" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_llc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 424w, https://substackcdn.com/image/fetch/$s_!_llc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 848w, https://substackcdn.com/image/fetch/$s_!_llc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 1272w, https://substackcdn.com/image/fetch/$s_!_llc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_llc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3530776,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://video.disney.com/watch/sorcerer-s-apprentice-fantasia-4ea9ebc01a74ea59a5867853&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/193801558?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_llc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 424w, https://substackcdn.com/image/fetch/$s_!_llc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 848w, 
https://substackcdn.com/image/fetch/$s_!_llc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 1272w, https://substackcdn.com/image/fetch/$s_!_llc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86b8a6a0-fbd7-4fac-9221-5b8a35113abd_2262x1696.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: Disney</figcaption></figure></div><p>With that, here are ten questions inspired by this sequence that are challenging me at the moment:</p><h2>Ten Questions about AI and Higher Education:</h2><ol><li><p>What does <em>competency</em> mean in an age of AI?</p></li><li><p>What does <em>success</em> mean in an age of AI?   </p></li><li><p>How can we help students avoid the <em>illusion</em> of understanding and ability when using AI?</p></li><li><p>What happens in a future where students become AI masters, and teachers AI apprentices? </p></li><li><p>When AI promises near-frictionless mastery of a subject, what is the <em>value</em> of pursuing mastery without it?  </p></li><li><p>What do we <em>owe</em> our students in an age of AI?</p></li><li><p>What does it mean to model mature AI use?</p></li><li><p>How do we empower our students to navigate the challenges and affordances of AI-enabled efficiency?</p></li><li><p>How do we help our students stretch their imagination in an age of AI, without being subsumed by the technology? </p></li><li><p> How do we provide students with the tools and skills to discover and embrace what it means to be human in an age of AI?</p></li></ol><h2>Postscript</h2><p>Of course, having teased you with the video and the questions, I couldn&#8217;t just leave it there! </p><p>These questions are &#8212; in case you hadn&#8217;t realized yet &#8212; not the usual suspects when it comes to grappling with the intersection between AI and higher education. 
They don&#8217;t, for instance, focus on things like cheating, AI-proofing assignments, creating syllabi and assignments with generative AI, using bots in class, and the many other challenges and opportunities that exploded onto the scene when OpenAI launched ChatGPT in 2022. </p><p>These are still current topics of conversation in many colleges and universities. But emerging AI capabilities are opening up new possibilities that make questions like these feel increasingly out of touch.</p><p>These include LLMs that can interpret, plan, and carry out tasks while checking their work to avoid errors; platforms like OpenAI&#8217;s Codex and Anthropic&#8217;s Claude Code that can deploy teams of agents to research and complete complex tasks; AI agents that can cross-reference and extract insights from more resources than a single person could possibly handle alone; AI apps that can generate prose that is indistinguishable from a human&#8217;s; AI agents that can take the place of students working on course assignments; systems that can deploy hundreds of AI agents and achieve the equivalent of thousands of hours of human work overnight. </p><p>And that&#8217;s just the start. </p><p>In a week where Anthropic decided not to release its latest model to the public because it&#8217;s too powerful, AI systems are beginning to make the idea that AI and education is just about navigating a single prompt window in a web browser feel deeply anachronistic.</p><p>To be clear, I am not talking about artificial general intelligence (AGI) or &#8220;superintelligence&#8221; here &#8212; both of which are rather ill-defined concepts. Rather, I&#8217;m talking about the emergence of transformative technological capabilities whose full extent we simply cannot comprehend, yet which already offer near-frictionless access to power that transcends our understanding.</p><p>These capabilities are deeply intertwined with &#8212; and influential on &#8212; how we think, how we learn, how we develop our understanding of ourselves and the world we live in, and how we imagine and create the futures we aspire to.</p><p>If we aren&#8217;t taking this seriously in higher education &#8212; either as an existential threat or as an opportunity unlike any that we&#8217;ve previously faced &#8212; we risk being swept away in the coming AI tsunami.</p><p>And this brings us back to <em>Fantasia. </em></p><p>In the Sorcerer&#8217;s Apprentice scene, Mickey risks being swept away by the consequences of his naivety. But in a world where transformative AI touches everything we do, ignoring its power may be just as naive as wielding it without understanding. </p><p>Mickey's mistake wasn't only that he embraced sorcery he didn't understand &#8212; it was that he acted with certainty unshaken by his ignorance. Is the modern equivalent acting with equal certainty in the other direction: trusting that traditional mastery alone is enough, while the water keeps rising?</p><p>And there&#8217;s the eleventh, bonus question &#128578; </p>]]></content:encoded></item><item><title><![CDATA[Spoiler Alert: I rebuilt my book for AI!]]></title><description><![CDATA[I've been experimenting with translating my 2018 book Films from the Future into a website designed primarily for AIs. 
Here's how it went.]]></description><link>https://www.futureofbeinghuman.com/p/spoiler-alert-wtf</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/spoiler-alert-wtf</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Thu, 02 Apr 2026 14:26:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ONHR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://spoileralert.wtf/" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ONHR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ONHR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:229943,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:&quot;https://spoileralert.wtf/&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192870319?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ONHR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!ONHR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Update: The complete content of spoileralert.wtf can now be uploaded into NotebookLM &#8212; providing ways to engage with the book&#8217;s content and it&#8217;s relevance to navigating emerging transformative technologies like AI that I could never have dreamed of in 2018. Simply point NotebookLM at https://spoileralert.wtf/llms-full.txt.</em> </p><p>As an author, I write for human readers. As I&#8217;ve noted before though, there&#8217;s a growing trend in AIs being the predominant consumers of the written word, often acting as a translator between source and consumer.</p><p>But if this is the case, why not embrace the trend and write directly for AI?</p><p>The idea intrigues me &#8212; and not only me. There&#8217;s a growing trend in creating AI-first content online. And so I thought I&#8217;d dive in and experiment with rebuilding my book <em>Films from the Future: The Technology and Morality of Sci-Fi Movies</em> as a website designed primarily for AI consumption. </p><p>The choice of book was very intentional. Even though <em>Films from the Future</em> was written in 2018, the underlying concepts, ideas, and observations are, if anything, far more relevant now than they were eight years ago. 
And as a result I&#8217;ve been thinking about ways to breathe new life into it.</p><p>And given the growing shift toward using AI apps to explore and synthesize information, repackaging it for AI consumption made a lot of sense.</p><p>Plus, it gave me the chance to add new material to the book&#8217;s original content while moving away from a title that only a publisher could love (I was never a fan of <em>Films from the Future</em>).</p><p>The result is the rather cheekily named website <a href="https://spoileralert.wtf/">spoileralert.wtf</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The website is built on a foundation of over 140 markdown files<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> that include the book&#8217;s original content, together with additional material on cross-cutting themes, connections to emerging trends and issues, and personal reflections from me on everything from the book&#8217;s backstory to movies that did and did not make the cut. But unless you are comfortable reading markdown files online, these are not intended for human consumption.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> </p><p>Rather, they are coordinated through a master AI-legible file &#8212; llms.txt (building on a standard proposed by <a href="https://www.answer.ai/posts/2024-09-03-llmstxt.html">Jeremy Howard</a>) &#8212; that allows AI platforms to act as a personal guide to the website.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> </p><p>This is a markedly different approach to simply uploading the book into an AI (assuming you could get hold of the PDF in the first place), or building an AI bot or agent.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> </p><p>For one, it allows AI models to navigate and synthesize far more material than is presently possible with either of these approaches. It also means that anyone using the site can decide for themselves which AI platform to use, and how to use it.</p><p>There&#8217;s also an added advantage that, if you are using something like Claude or ChatGPT with memory turned on, the website plus AI become a highly personal guide to exploring emerging technologies and their responsible and beneficial development and use. </p><p>Reflecting the website&#8217;s AI-first design, the human-facing part of <a href="https://spoileralert.wtf">spoileralert.wtf</a> is minimalistic.
Apart from a brief introduction and overview, the landing page includes a prompt to cut and paste into an AI of your choice, and that&#8217;s pretty much it:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://spoileralert.wtf/" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rV5D!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 424w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 848w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1272w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rV5D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png" width="408" height="697.49377593361" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db40901c-d74c-4ad4-9891-36629f078e37_964x1648.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1648,&quot;width&quot;:964,&quot;resizeWidth&quot;:408,&quot;bytes&quot;:269366,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://spoileralert.wtf/&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192870319?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rV5D!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 424w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 848w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1272w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>&#8230; although, being a writer, I couldn&#8217;t resist adding a little more stuff below the sign off!</p><p>At this point, this is still an experiment in AI-focused publishing. But as more and more people rely on AI apps rather than direct sources for information, I suspect that it&#8217;s a direction that&#8217;s likely to become increasingly important.</p><p>With that, please do try it out and let me know how you get on: <a href="https://spoileralert.wtf/">spoileralert.wtf</a>. </p><p>And if you&#8217;re interested in more information on the technical details and the experience of building the website, read on &#8230;</p><div><hr></div><h1>The Below the Fold stuff</h1><h3>The website architecture</h3><p>As I mentioned above, the website is built around a comprehensive llms.txt file that gives any AI that&#8217;s pointed toward it a clear map of what the site includes and where to look for specific content. If you&#8217;re interested you can read the llms.txt file <a href="https://spoileralert.wtf/llms.txt">here</a> &#8212; it&#8217;s a markdown file and so easier to read by downloading and opening in a markdown editor.</p><p>This file describes the site&#8217;s architecture and links to 127 markdown files that provide guidance on interpreting and engaging with the book and website content, as well as allowing the AI access to the full text of the original book.</p><p>Within these files, six top-level domain guides cover:</p><ul><li><p>Emerging science and technology</p></li><li><p>Responsible and ethical innovation</p></li><li><p>Navigating the future</p></li><li><p>The twelve movies that the book draws on</p></li><li><p>Post-2018 developments, and </p></li><li><p>Complex emerging questions</p></li></ul><p>These provide sufficient context allow the AI to navigate book content and associated material according to specific themes and areas.</p><p>Each domain file then links to a number of specific topic files &#8212; over ninety of them. 
These were identified and fleshed out working with Claude Code, and create a thematic guide to the book that cuts across chapters and issues.</p><p>Finally, there are six supporting files that cover everything from discussion questions and my original movie shortlist, to an educator&#8217;s guide, and even some previously unpublished book trivia.</p><p>If you&#8217;re interested, the full site structure can be explored through the <a href="https://spoileralert.wtf/contents.html">Contents</a> web page.</p><h3>The process</h3><p>The complete website was built while working closely with Claude Code. While I had a very strong conceptual and editorial steer, Claude Code was pivotal in helping translate this into reality. Claude Code helped develop the site&#8217;s architecture, drafted content files, generated html code, and helped debug/refine what ended up being a deeply integrated and interconnected set of resources.</p><p>In all there are nearly 400 files associated with the site, as many of the markdown files have associated html files. And all need to be cross-linked and cross-referenced. Both the magnitude of a project like this, and the complexity of tracking hundreds of links, would have made this a near-impossible task for me to take on unaided.</p><p>Similarly, Claude Code could never have generated the website without my input and steer &#8212; the feel, functionality and purpose of the site, as well as the type of content, all come from me. And very intentionally, the site incorporates my voice, tone, insights, perspectives, and sensibilities, in ways that were only possible through working collaboratively with Claude Code.</p><p>Through all of this, Claude Code was a joy to work with &#8212; especially when adding new files that required deep integration over hundreds of documents! The barrier to entry on a project like this is remarkably low, and the ability to talk through ideas, plans, and implementation as if talking to a colleague or co-worker was, for me, a game changer.</p><h3>The extra bits</h3><p>One of my hopes with this exercise was being able to add substantial value to the original book by making the content more relevant than ever to the present day. I also wanted the chance to add further content that users could not get anywhere else. As a result, if you assess content by file count, the original book constitutes less than 10% of the rebuilt version.</p><p>I won&#8217;t give too much away here as it&#8217;ll spoil the joy of discovery as you explore the site through your AI of choice. But there is information embedded in the site&#8217;s files on the backstory to the book that I haven&#8217;t shared before, details of films I considered for the book but that never made it, commentary from Claude on what I left out, and much more.</p><p>There is also a series of <a href="https://spoileralert.wtf/example-conversations.html">conversations</a> on the site between simulated users and Claude. I added these as I found I was far too close to the material to get a clear sense of whether the website was in any way useful. And so I asked Claude to generate a number of user profiles, and then tasked Claude Code to simulate conversations between these and an AI primed with the website&#8217;s prompt.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>They are a little &#8220;AI&#8221; in places I must admit.
But they are also a great way to get a handle on how this idea of an AI-legible &#8220;living book&#8221; works. And they are a lot of fun to read!</p><p>Also, for the tech geeks, all the files are available to explore and dive into on <a href="https://github.com/2020science/spoileralert-wtf">Github</a>.</p><h3>What&#8217;s working well, and what&#8217;s  not</h3><p>As I&#8217;ve noted above and in the footnotes, it&#8217;s tempting to consider this exercise as simply a glorified version of giving an AI a copy of the book and asking about it (like you might do in NotebookLM for instance), or building an AI agent/bot around it.</p><p>But spoileralert.wtf is very different from either of these. </p><p>And this makes it intriguing, ground-breaking,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> and sometimes just a little frustrating.</p><p>Unlike using the book through RAG, or developing a bot like a Gem (Gemini) or GPT (ChatGPT), the llms.txt-based approach allows an AI to navigate through a vast corpus of material, and to draw on connections that would otherwise be hard to make.</p><p>It also allowed me to architect the experience at a level of nuance and sophistication that would have been out of my control with a bot/agent, or by simply letting people upload the book to an AI platform themselves.</p><p>And this is the beauty of using an llms.txt file as a guide for AIs to navigate websites that are designed specifically for them. In this case, it enables an LLM-based AI to leverage a map/hub/spoke/web model that is designed specifically for how it consumes and utilizes content.</p><p>But there are issues with this approach.  Not least is the challenge that, at present, most AI models do not recognize llms.txt files by default. And this is why, in the current configuration, the copy and paste prompt includes specific instructions to read the file. </p><p>Then there are the AI platforms themselves. It turns out that some models are currently not advanced enough to engage fully with the material, or are simply not designed for this type of content.</p><p>For instance, there are still AI systems (including those that Microsoft uses) that use indexing by Bing to access web content (yes, you read that correctly). And so anything not indexed by Bing is essentially invisible to them.</p><p>And, it turns out, Bing refuses to index anything with the domain wtf. Who would have guessed!</p><p>Gemini has a similar issue &#8212; not with the domain, but with page indexing on Google. As a result, until a site is fully indexed by Google, parts of it will remain invisible to Gemini. And to complicate things, Google does not seem to like indexing markdown files.</p><p>To get around this, every markdown file on the website has a parallel html file. There&#8217;s also a parallel llms-html.txt index that provides the key to using them &#8212; along with instructions in llms.txt to use this as a backup if issues are hit retrieving markdown content.</p><p>This, I was pleased to see, works surprisingly well, with Gemini (and even Claude at times) switching to html content if the markdown files are being troublesome. 
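</p><p>For anyone who wants to see the mechanics, here is a rough Python sketch of the pattern this sets up: read llms.txt to map the site, follow the markdown files it links to, and fall back to a file&#8217;s html twin if the markdown can&#8217;t be retrieved. The helper names and the link-matching pattern are mine, purely for illustration, and the sketch assumes the linked files appear as ordinary relative markdown links:</p><pre><code>import re
import urllib.request

BASE = "https://spoileralert.wtf/"

def fetch(url):
    # Plain HTTP GET; nothing platform-specific assumed
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def markdown_links(llms_txt):
    # Assumes llms.txt links its files as [title](path.md) with relative paths
    return re.findall(r"\]\(([^)\s]+\.md)\)", llms_txt)

def fetch_with_html_fallback(path):
    # Mirror the site's own fallback: every .md file has a parallel .html file
    try:
        return fetch(BASE + path)
    except Exception:
        return fetch(BASE + path[:-3] + ".html")

site_map = fetch(BASE + "llms.txt")        # 1. read the map
for path in markdown_links(site_map)[:3]:  # 2. follow a few linked files
    doc = fetch_with_html_fallback(path)   # 3. drop back to html if needed
    print(path, len(doc), "characters")
</code></pre><p>In practice an AI platform does all of this behind the scenes once it has been pointed at llms.txt; the sketch is just to make the map-then-fetch-then-fall-back behavior concrete.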
<p>With this, here&#8217;s where things stand as of writing with different models:</p><ul><li><p>Claude Opus 4.6 (Extended thinking): works very well indeed.</p></li><li><p>Claude Sonnet 4.6: Also works well.</p></li><li><p>Gemini (Pro): Somewhat flakey, but possibly because it&#8217;s relying on files that have been indexed by Google (not all of them yet). And it&#8217;s not great with markdown files. This will hopefully improve over time.</p></li><li><p>ChatGPT: Good when it&#8217;s working well, but unreliable!</p></li><li><p>Grok (Expert): Pretty good.</p></li><li><p>DeepSeek (DeepThink): Enjoying the pants on fire hallucinations here.</p></li><li><p>Perplexity: Not really functional at all &#8212; at least with the free version.</p></li></ul><p>The bottom line seems to be that most platforms will provide useful but superficial insights around the book using the website, but the simpler ones (and DeepSeek) are prone to missing stuff, not digging deep enough, veering off toward other sources, or simply making things up.</p><p>The more powerful the model, the larger the context window, and the more it utilizes reasoning/thinking modes, the better it is &#8212; with Claude far outstripping the rest.</p><h3>And a final word</h3><p>I have no idea whether anyone will find this exercise useful or interesting &#8212; and so would love feedback in the comments below.</p><p>I do know that there&#8217;s content in the 2018 book that is deeply relevant to this moment in time. And that this is buried in a book that very few people will read because a) it&#8217;s a book, b) it&#8217;s printed on paper (unless you have the Kindle version or audiobook of course), and c) it&#8217;s more than six minutes old (at least, it feels like this is the current attention-lifetime for new material).</p><p>And because of this, I feel quite strongly that new ways of making that content accessible and relevant should be explored.</p><p>The approach here of creating content intended for AI seems like a potentially interesting way forward, as it makes the book far more useful to someone using AI than it would otherwise be.</p><p>More personally though, this whole exercise has given me the opportunity to revisit the content and the thinking behind the book, as well as flexing my creative muscles while having some fun along the way. </p><p>And there&#8217;s been something quite generative about working with Claude Code on the additional material &#8212; including stuff that I&#8217;ve never written about before.</p><p>But, of course, to find that, you&#8217;ll have to try the <a href="https://spoileralert.wtf/">spoileralert.wtf</a> prompt out for yourself and see where it takes you &#128513;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There are, not surprisingly, many layers to why I chose this particular URL. To find out more though, you&#8217;ll have to point your AI to it and ask it why!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Markdown files are becoming the de facto standard for content written for AI consumption. 
And compared to regular web pages they offer a lot of advantages, including eliminating an awful lot of formatting and contextual content that is irrelevant to an AI, but eats up tokens anyway.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>A bit of a spoiler alert here: If you are desperate to read the AI-intended content and are put off by the markdown formatting on the screen, you can access web-formatted versions from here: <a href="https://spoileralert.wtf/browse.html">https://spoileralert.wtf/browse.html</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>I&#8217;m not sure there are any AI platforms that actively use the llms.txt protocol at the moment &#8212; which is a shame as the idea is that when an LLM visits a website the first thing it does is read llms.txt to allow it to navigate and access the content as an AI and not a human. But there&#8217;s nothing like a good bit of future-proofing!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Of course, there will be readers who are adamant that everything here could be replicated by uploading files into ChatGPT or NotebookLM, or creating an AI bot or agent. But trust me on this, the llms.txt plus integrated markdown file architecture is a fundamentally different approach to making content AI-navigable without locking into a specific platform. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Claude Code was instructed to create two agents with a firewall between them &#8212; one representing a user, and one representing Claude primed with the spoileralert prompt &#8212; and then simulate a back and forth between them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>There are people who are building AI-based extensions to books, and AI-legible versions of books in uploadable files (similar to our work with <em><a href="https://www.futureofbeinghuman.com/p/why-were-giving-away-our-book-on-thriving-with-ai">AI and the Art of Being Human</a></em>). 
But I&#8217;ve struggled to find anyone currently using an llm.txt-markdown architecture in the same way that it&#8217;s being used here.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Can AI create a comprehensive degree program proposal in the time it takes to grab a coffee?]]></title><description><![CDATA[What started as an idle question got me thinking about how artificial intelligence stands to upend how we best-serve students in higher education]]></description><link>https://www.futureofbeinghuman.com/p/can-ai-create-an-undergraduate-degree-plan</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/can-ai-create-an-undergraduate-degree-plan</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 29 Mar 2026 15:13:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-Pp3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-Pp3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-Pp3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6614409,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192455475?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!-Pp3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>If emerging AI systems can design, refine, and help deliver, undergraduate degree programs that are far better designed and far more effective than those created by faculty committees alone, do we owe it to future students to ditch tradition in favor of emerging capabilities?</p><p>This, I must confess, is not the question I started out with as I began working on this article. But it&#8217;s one that I&#8217;m finding it hard to let go of, having spent the past couple of days working with Claude Code on designing a comprehensive plan for a new degree program.</p><p>To be honest, I didn&#8217;t even set out to design a new degree. It was just an idle exercise in seeing what&#8217;s possible with the latest wave of agent-based AI platforms. 
But having asked Claude Code the question, I&#8217;m finding it increasingly difficult to ignore the result.</p><p>Before I bury the lede any further, you can open/download the complete two hundred and twenty three page degree program proposal that Claude Code produced below &#8212; complete with design philosophy, program architecture, learning outcomes, career pathways, value propositions to students, parents, and employers, and detailed syllabi for 18 core courses and a capstone: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://andrewmaynard.net/files/TSI-Program-Proposal-v5.pdf" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!liHZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 424w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 848w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 1272w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!liHZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png" width="487" height="610.1335227272727" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1764,&quot;width&quot;:1408,&quot;resizeWidth&quot;:487,&quot;bytes&quot;:55522,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://andrewmaynard.net/files/TSI-Program-Proposal-v5.pdf&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192455475?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!liHZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 424w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 848w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 
1272w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Click on the image to open/download the proposal</em></figcaption></figure></div><p>As you read it, take it from me as someone who does this for a living, this is impressive.</p><p>The real story here though is what I learned working with Claude Code on this, and why it got me thinking more deeply than I expected on what we owe our students.</p><h2>Setting the scene</h2><p>I&#8217;ve been using the desktop version of Claude Code for a few weeks now, and I&#8217;ve been impressed by its ability to break projects down into sequential tasks, assign multiple agents to these tasks, and orchestrate them toward delivering a final product.</p><p>For anyone who&#8217;s use the latest iteration of Claude Code, OpenAI&#8217;s Codex, or similar systems, you&#8217;ll know already that these are nothing like your average browser-based AI or AI bot. Here, I must confess I&#8217;m a bit of a novice user compared to some of my grad students. But I&#8217;ve intentionally kept things that way as I&#8217;m interested in seeing what users with little time or patience for engaging with technical wizardry can achieve with easily accessible AI platforms.</p><p>With this in mind, I&#8217;ve been exploring different ways of leveraging Claude Code as something of a side project.</p><p>The desktop version of Claude Code allows you to engage with it via a text box, much like Claude in a web browser. But there the similarities end. Claude Code (and similar systems) can read and write files to your computer, draw on &#8220;skills&#8221; that allow them to achieve an increasingly wide array of tasks, plan strategic approaches to exploring and executing projects, launch and coordinate multiple AI agents to carry out specific tasks, write and execute code as they do this, and much more. 
</p><p>As a result, they are far more powerful at executing complex tasks than a simple browser-based AI or a single AI bot or agent.</p><p>They also, it has to be said, only represent the tip of the iceberg when it comes to what more advanced agent-based AI implementations are capable of. Which means that everything I&#8217;m writing about here barely scratches the surface of what a more sophisticated agent-based approach can achieve in the right hands.</p><h2>The Process</h2><p>Against this background, a couple of days ago I somewhat idly asked Claude Code to develop a &#8220;mature plan for a new undergraduate degree&#8221; (I&#8217;ve included the complete prompt below).</p><p>This wasn&#8217;t completely out of the blue. A few years ago I looked into the idea of developing a degree program around navigating advanced technology transitions, and started to flesh out some ideas. But I never got much further than jotting down a few thoughts and concepts. </p><p>This was well before the current emergence of easy-access LLM-based AI systems though.</p><p>Revisiting this, I was curious just how far &#8212; and how fast &#8212; Claude Code could take the idea and run with it.</p><p>And so I opened a new project, and asked the following (bad grammar and all &#8212; and I&#8217;ve only just spotted &#8220;curse-specific learning objectives!&#8221;):</p><blockquote><p>I want you to develop a mature plan for a new undergraduate degree. This is a complex task that will require multiple steps. I have given you some of them below (as well as context), but you will have to research what the elements of a very strong undergraduate program are and build the program around these.</p><p>Context and guides:</p><p>The undergraduate degree should be a 4 year degree at a research university - assume it is a venture between engineering and a business school that also intersects with arts and humanities</p><p>It should be grounded in pedagogical and learning/education design best practices, including having clear, outcomes-aligned and assessable learning objectives and skills development</p><p>It should go deep on core courses, including detailed syllabi and and curse-specific learning objectives/skills</p><p>It should leave placeholders for electives - maybe categories</p><p>It should align very closely and realistically with career opportunities and pathways.</p><p>And above all it should be grounded in deep research so that the program is implementable and not just a paper exercise</p><p>As an area of focus, start with the idea of navigating technology transitions in a technologically complex world and hone this to something that is likely appeal to prospective students, parents, and employers, provide sufficient content, quality and heft over 4 years of study, and differentiate itself from the competition.</p><p>This is a very large task that will require a detailed and multi layered plan and multiple agents.</p><p>Any questions?</p></blockquote><p>After a couple of clarifying questions on process, Claude Code got stuck in, and started down the process that led to the document above.</p><p>The process itself was pretty straightforward:</p><ol><li><p>The prompt (above) followed by four clarifying questions</p></li><li><p>Me grabbing a coffee (that wasn&#8217;t just rhetoric in the title) &#8212; around 20 minutes to get to the nearest campus Starbucks and back</p></li><li><p>Claude conducting initial research, launching 5 agents, and delivering the first version of the proposal &#8212; captured across 
23 markdown files and 48,000 words after undergoing several self-initiated internal reviews and edits. (As I was not being that sophisticated, this took a little longer than 20 minutes as I had to manually grant access to my laptop for various operations, but the actual work time was well within the coffee run window.)</p></li><li><p>Me launching a second project in Claude Code and asking it to conduct four detailed reviews of the first draft from the following perspectives:</p><ol><li><p>Academic/pedagogical &#8212; does the degree program hold together pedagogically</p></li><li><p>Prospective employers</p></li><li><p>Prospective students</p></li><li><p>The parents of prospective students</p></li></ol></li><li><p>Me providing the initial Claude Code project with the reviews and asking for an updated proposal.</p></li><li><p>Me asking Claude Code to package the revised version as an MS Word document (I was still finishing my coffee at this point)</p></li><li><p>Me providing feedback to Claude Code on formatting and key components of the content and focus, and asking for an updated draft (there were a couple of iterations here).</p></li><li><p>Me going through the final proposal manually and refining the formatting while making the occasional edit.</p></li></ol><p>Far and away the longest part of this process was my editing &#8212; something like a 10:1 ratio of my time to Claude Code&#8217;s.</p><p>At the end of the process I asked Claude Code for a session audit. The final markdown files before their translation to a Word doc and my final editing contained over 66,000 words spread over around 215 pages, were the culmination of 334 files generated by Claude Code, and were the product of 7 autonomously designed and deployed sub-agents addressing curriculum structure analysis, external research, change audits, internal consistency (2 sub-agents), codebase exploration, and research synthesis. Across the project, Claude Code called on specific tools over 300 times to complete the tasks it assigned itself.</p><p>This represents a level of complexity and orchestration that no web-based LLM or single-agent chatbot could get close to. And the actual time Claude Code spent on this was just an hour or so tops.</p><p>But was the proposal any good, or did I just end up with 200+ pages of AI slop?</p><p>I&#8217;m sure some readers will disagree with me here &#8212; on principle if for no other reason &#8212; but based on well over a decade of teaching, developing courses and programs, and academic leadership in higher education, I am comfortable saying that the resulting document, while far from perfect, far surpasses most degree-planning documents I have seen emerge from more conventional processes.</p><h2>So what does this all mean?</h2><p>In some ways this was a relatively straightforward task for a multi-agent LLM-based AI. Effective undergraduate degree program development isn&#8217;t rocket science, but it is hard work. And it does require knowledge of pedagogical and program design best practices, a good handle on domain knowledge and how to integrate across domains while tying this to learning process and outcomes, a professional understanding of degree-to-careers pathways, and the ability to expertly and simultaneously coordinate research, development, and drafting across multiple dimensions.</p><p>These are challenging for groups of educators to achieve &#8212; especially if they represent just one of many responsibilities they are juggling. 
And they are well beyond the capabilities of single AI agents and bots (although these are adept at producing content that looks good but is not).  </p><p>But they are, in principle, relatively straightforward for multi-agent AI systems.</p><p>And the resulting proposal supports this.</p><p>If you haven&#8217;t looked at it (and I would encourage you to do so), the proposal builds on sound design and learning principles to deliver a deeply integrated program that has all the hallmarks of providing students with successful career pathways. It even includes information on how to market to prospective students, parents, and employers, while addressing how students might position themselves to demonstrate their new skills and abilities. </p><p>Of course, it&#8217;s not perfect. With time (and I had to resist working on it further) I would want to go through a few more iterations to further develop/refine the program. Some of the syllabi definitely need some work. And this is a blank-sheet proposal that assumes any university where it is implemented will build it up from scratch with the necessary faculty and staff &#8212; a luxury that few universities have.</p><p>That said, as a starting point, it is very good indeed. And this is where it got me thinking about what we owe our students.</p><h2>What we owe our students</h2><p>Here, I must confess that I worry that many existing undergraduate degree programs are not as good as they could be &#8212; especially at research-focused universities, where something more akin to a &#8220;trickle down&#8221; model of education from world experts in their fields to young, open minds has been adopted.</p><p>This, of course, is a gross overstatement, and many universities &#8212; my own included &#8212; take education very seriously. And yet the reality is that many faculty are thrust in front of students with no training on how to teach, placed on degree committees with no knowledge of program design best practices, and charged with creating career pathways having never had a career outside of academia.  </p><p>The result is a system that is functional but not necessarily optimal. </p><p>What, though, if using agent-based AI systems could help address these shortfalls? Maybe by producing robust drafts that reflect best practices. Or ensuring clear programmatic through-lines from entry to career. Or addressing value propositions to multiple stakeholders. And, of course, helping ensure student success comes before academic hubris.</p><p>These are all options that don&#8217;t replace humans, but rather vastly enhance their professional capabilities. And here it&#8217;s worth noting that the Claude Code-generated proposal above isn&#8217;t good because Claude Code in isolation knew what a great degree program looks like, but because I was able to provide expert direction, feedback and evaluation along the way.  </p><p>Even if such capabilities are just used to increase the quality of program development, surely we owe this to our students. Otherwise I worry that we risk selling them something that is far inferior to what it could be &#8212; remembering that this is a life-investment for many students who can barely afford it (or not in some cases) &#8212; while kidding ourselves that they are benefitting from our trickle-down wisdom.</p><p>Of course, we could just fall back on arguments around the sanctity of human intellectual labor and the inviolable standing of academics. 
But at some point students are going to start voting with their feet.</p><p>And given the choice between a human-made degree that seems to go no-where and an AI augmented one that does, I suspect I know which way they&#8217;ll lean.</p><p>But even before we get there, I&#8217;d like to think that we owe it to them to put their success before our own traditions and egos, and utilize emerging capabilities in ways that provide them with the education, experiences and insights that will enable them to thrive.</p><p>And of course, if we can achieve much of this in the time it takes to grab a coffee, so much the better!</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Are you an AI Apocaloptimist?]]></title><description><![CDATA[The much-anticipated documentary "The AI Doc : Or How I Became An Apocaloptimist" hits US cinemas this week. I attended an early screening while in Copenhagen last week. Was it worth it?]]></description><link>https://www.futureofbeinghuman.com/p/are-you-an-ai-apocaloptimist</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/are-you-an-ai-apocaloptimist</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 22 Mar 2026 16:23:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ltg6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ltg6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ltg6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 424w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 848w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 1272w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9604251,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/191663535?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ltg6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 424w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 848w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 1272w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This coming Thursday, <a href="https://tfip.org/film/the-ai-doc-how-i-became-an-apocaloptimist/">a new documentary</a> grappling with the threats and opportunities of AI from directors Daniel Roher and Charlie Tyrell opens in US cinemas. 
</p><p>There&#8217;s already a buzz growing around <em>The AI Doc: Or How I Became An Apocaloptimist,</em> with some seeing it as one of the more balanced and insightful AI films to come out in some time.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> And, it has to be said, the framing of the film (existential angst around bringing a kid into a crazy AI world) and an A-list cast of interviewees &#8212; from Sam Altman and Demis Hassabis to Karen Hao and Tristan Harris &#8212; tick a lot of boxes.    </p><p>But does it hit the spot?</p><p>I was in Copenhagen this past week to give a keynote at <em><a href="https://www.thesummit.dk/">The Summit</a></em><a href="https://www.thesummit.dk/"> </a>&#8212; a gathering of Nordic leaders, innovators and organizations co-hosted by the Confederation of Danish Industry and the <a href="https://www.cifs.dk/">Copenhagen Institute for Futures Studies</a>. And I had the unexpected opportunity to attend a screening of the documentary ahead of its US release as part of the Copenhagen International Documentary Film Festival (<a href="https://cphdox.dk/">CPH:DOX</a>). </p><p>And so, cold Danish beer in hand, and in a packed theater surrounded by avid documentary fans, I did.</p><p>And I had a very enjoyable evening &#8212; topped off by a question and answer session with three of the documentary&#8217;s producers, Shane Boris, Diane Becker and Ted Tremper.</p><p>The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (film maker Caroline Lindy) to bring a kid into the world.</p><p>The setup works well as we share Daniel&#8217;s highs and lows as he brings person after person into his &#8220;makeshift&#8221; studio, and grills them about AI. And it&#8217;s hard not to be impressed by the people the crew managed to persuade to talk with him. There are even a couple of quite delicious non-appearances by Elon Musk and Mark Zuckerberg!</p><p>From a film-making perspective, the pacing, the narrative, and the emotional roller coaster of a journey all hit the spot. It&#8217;s an accomplished piece of documentary making that also packs a punch. And for me, watching it in that packed theater was the perfect end to a great week spent talking with interesting people about tech and the future. </p><p>In other words, I&#8217;d definitely recommend heading out to see it &#8212; even without the cold beer and a crowd of Danish documentary enthusiasts.  </p><p>Having said that, this is a recommendation that comes with some caveats.</p><p>Like all documentaries, <em>The AI Doc</em> sets out to tell a specific story in a particular way. And here, it achieves what the directors and producers set out to do very effectively. It&#8217;s touching, funny, shocking, and thought-provoking. </p><p>But this isn&#8217;t the nuanced story about AI that I would tell, given the chance. 
(And, of course, this is probably why I haven&#8217;t been given the chance!)</p><p>Despite its impressive cast of characters, the documentary&#8217;s missing &#8212; at least from the perspective of my own work &#8212; a huge swath of expert insights around responsible, ethical, and safe AI. The people who are interviewed grab the attention, and there&#8217;s no question that they make for a riveting documentary. But there&#8217;s a point where I found myself feeling that I was being drowned in opinions that were only loosely tethered to reality &#8212; whether from the techno-doomers or techno-optimists being interviewed.</p><p>And the documentary is most definitely light on some of the more nuanced challenges around responsible development and use of AI, from the risk of weakened infrastructure and the dangers of premature adoption, to growing concerns around impacts of AI on behavior and wellbeing.</p><p>However, it was not my documentary &#8212; thank goodness, as no one would come to watch it if it was! And there is something rather churlish about reviews that overlook what <em>has</em> been achieved and, instead, focus on what they think has not. </p><p>And so I thought I&#8217;d wrap this piece up with five ways of appreciating and enjoying the documentary while also digging deeper into navigating opportunities and challenges associated with increasingly powerful AI capabilities.   </p><p>First off, go watch the documentary &#8212; it&#8217;s well made, entertaining, and thought-provoking. And as you watch it, enjoy it, think about it, talk about it, explore how it intersects with your experiences and your perspective. As a conversation starter, the film definitely achieves what the directors and producers set out to do.</p><p>Second, remember while watching it that this represents a substantial and very intentional creative project, and the team behind it did an amazing job &#8212; even more so given how fast the AI landscape was changing as they were making it. In the Q&amp;A session in the screening I attended, producer Ted Tremper noted that they nearly had an existential crisis of their own when Sam Altman was removed as CEO of OpenAI in 2023, and then reinstated 72 hours later &#8212; an apt reflection of the whole AI roller coaster they were trying to capture.</p><p>Third, remember that, if you are not deep in the weeds of AI, the complexities of the technology&#8217;s potential impacts on society and the future are near-impossible to capture in a documentary that people will actually watch. And in this context, the narrative choices the team made make far more sense than they might otherwise do. As Ted Tremper said in the screening&#8217;s Q&amp;A in Copenhagen, the documentary needed to act as a &#8220;first date&#8221; with the audience &#8212; revealing enough to invite a continuing conversation rather than killing the relationship with too much information - or just being a jerk!</p><p>Fourth, lean into the &#8220;first date&#8221; analogy and use the documentary as a jumping-off point for taking things further &#8212; not as a definitive guide, but as a catalyst for further exploration. I would, of course, strongly recommend reading <em><a href="https://www.aiandtheartofbeinghuman.com/">AI and the Art of Being Human</a></em> as the perfect next date, but I&#8217;m sure others will have other suggestions.</p><p>And finally, enjoy the storytelling for what it is. 
Not as a lecture on the absolute truth about AI, but as an entry point to thinking further about a technology that will have a profound impact on our lives, whether that&#8217;s apocalyptic, optimistic, or &#8212; most likely&#8212; something way more nuanced in between these extremes.</p><p>And when you have seen it, let me and others know what you think in the comments!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I do feel there&#8217;s a missed opportunity here for a new word &#8212; AIpocaloptimist &#8212; that uniquely captures the angst around extreme uncertainties and outcomes seemingly associated with AI futures. Am I an AIpocaloptimist? I&#8217;m not sure, but I may need to add it to my bio anyway &#8230;</p></div></div>]]></content:encoded></item><item><title><![CDATA[The future has never been this much fun!]]></title><description><![CDATA[If you ever find yourself desperately seeking nuanced perspectives on tech and the future that make you think while bringing a smile to your face &#8212; and need a break from AI slop &#8212; this may be for you!]]></description><link>https://www.futureofbeinghuman.com/p/the-future-has-never-been-this-much</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/the-future-has-never-been-this-much</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 15 Mar 2026 13:44:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yU9B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yU9B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yU9B!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yU9B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1165996,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/190739240?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yU9B!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Gemini</figcaption></figure></div><p>One of the many challenges of making sense of today&#8217;s technologically complex and fast-paced world is the preponderance of loud voices telling you what to think &#8212; whether they&#8217;re pushing visions of a tech utopia or impending apocalypse. 
</p><p>These voices get traction because simple ideas, repeated often, work &#8212; especially when they seem to reinforce what your gut tells you is true. But this does mean that more nuanced perspectives and voices often get drowned out. </p><p>Which is a problem when thriving in a complex future is all about nuance.</p><p>I&#8217;d like to think that we bring some of that nuance to thinking about tech and the future in the <em>Modem Futura</em> podcast. Plus, in a world that&#8217;s increasingly hungry for human authenticity, we have that in spades!</p><p>And so this week, rather than write a long post, I thought I&#8217;d simply embed our latest podcast episode and let you decide for yourself.</p><h3><strong>The Futures Cone: Preposterous to Plausible<br>Episode 74 of Modem Futura:</strong></h3><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;b1e08c2a-af22-4922-8daa-7a3d7ae312f6&quot;,&quot;duration&quot;:3852.8523,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>No intro, no explanation, no exposition.</p><p>Just a suggestion that, if you are desperate for content that invites you to think in creative ways about the future, that opens up possibilities rather than closes them down, that makes you smile &#8212; or even laugh out loud, that is <em>not</em> AI-generated, and that makes you feel a little smarter and your day a little better, you might want to check us out.</p><p>And if you like what you hear here, we&#8217;d love you to join us on our journey to explore the intersection between tech, society and the future with a good dose of humor each week. (You can find us wherever you get your podcasts including <a href="https://podcasts.apple.com/us/podcast/modem-futura/id1771688480">Apple</a>, <a href="https://open.spotify.com/show/3eFl4hY4t1qTCWE2Bxotrg">Spotify</a> and <a href="https://www.youtube.com/@ModemFutura/videos">YouTube</a>).</p><p>Cheers!</p><p>Andrew</p><p></p>]]></content:encoded></item><item><title><![CDATA[Is AI reducing you to a LinkedIn stereotype?]]></title><description><![CDATA[After playing around with Claude this week, I&#8217;m worried that LLMs are stripping us of all those idiosyncrasies that make us interesting as people. 
Are we all being "LinkedInified" by our AI creations?]]></description><link>https://www.futureofbeinghuman.com/p/ai-linkedinification</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/ai-linkedinification</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 08 Mar 2026 15:29:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UMz1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UMz1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UMz1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 424w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 848w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1272w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UMz1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png" width="1365" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1365,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1404470,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/190236256?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UMz1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 424w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!UMz1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1272w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: From Apple&#8217;s iconic 1984 Superbowl ad, with a dash of help from Nano Banana 2!</figcaption></figure></div><p>Ask an LLM-based AI to profile someone who has an online presence, and I&#8217;d put money on you getting a perfectly adequate LinkedIn-style summary that is as boring as mud. Fine for a cookie-cutter professional profile, but utterly devoid of anything that reflects who the person <em>really</em> is.</p><p>Actually, forget the money bit, as this guarantees a slew of people proving me wrong and demanding payment! But despite this, the reality is that LLMs are trained to respond in specific ways to certain types of questions&#8212;in this case, keeping the profile within what it considers to be professional norms. And as they do, they reflect baked-in biases that are often hidden in their honey-tongued prose.</p><p>This is not new news of course. 
But I wonder how many of us realize just how much this ends up compressing the amazing, wonderful richness of real people into a sea of turgid grayness.</p><p>Or, much more seriously, how much it ends up squeezing the sheer diversity of human identity into a few narrowly defined and, if I&#8217;m being honest, rather conventional categories.</p><p>I was reminded of this quite rudely this past week as I was playing around with an admittedly trivial experiment while using Anthropic&#8217;s Claude.</p><p>I was updating my personal website, and wanted to add AI-readable information that wasn&#8217;t visible to human browsers&#8212;the idea being that an AI ingests and uses web-based information differently to people.</p><p>It&#8217;s something that a growing number of people are playing with. For instance, there&#8217;s the whole concept proposed by Jeremy Howard of adding information in an <a href="https://llmstxt.org/">LLMs.txt file</a> that&#8217;s exclusively designed for AI consumption, just as information in robots.txt is designed for web crawlers.</p><p>Unfortunately, most AI apps don&#8217;t actively look for an LLMs.txt file yet, and so I had to revert to placing human-invisible but AI-readable text on the website.</p><p>And this is where things got interesting.</p><p>To test this out, I added AI-visible text to <a href="https://andrewmaynard.net/">andrewmaynard.net</a> that included honest, but most definitely not conventional, information about my approach to my work and life. The idea was that, if this worked, asking something like Claude to create a profile of me based on the website would include this information.</p><p>To my surprise (and I may have been a little naive here), Claude completely ignored the new information and provided a super-boring LinkedIn-style profile.</p><p>And not just Claude. Nearly every model I tried responded in a similar way. No matter how many times I tried, all I got back was boring Andrew.</p><p>Of course, I could have forced the issue with the right prompt. But that wasn&#8217;t the point.</p><p>The exercise&#8212;trivial as it is&#8212;revealed something that is deeply embedded in LLM-based AIs. And that&#8217;s their tendency to fit responses to well-worn conventions; in this case, squeezing someone into a LinkedIn-style profile while stripping them of any individuality, because the LLM is trained to assume that that&#8217;s the appropriate response.</p><p>I suspect that there are many, many more &#8220;conventional response&#8221; templates embedded in the AIs we&#8217;re increasingly using. And in all likelihood, some of them are a lot more disturbing than simply flattening an interesting individual into a LinkedIn stereotype.</p><p>For instance, without intentionally steering them, how do LLM-based AIs reflect original thinkers, people with alternative lifestyles, anyone who lives on the edge of convention, or anyone whose identity doesn&#8217;t fit a neat and plug-and-play category?</p><p>On one hand, this flattening of human identity can be seen as an irritation. On the other, it&#8217;s suggestive of a largely-hidden AI hand promoting specific social norms and expectations and, by extension, behaviors. 
</p><p>I suspect that fans of Cory Doctorow would see it as yet another example of &#8220;<a href="https://www.newyorker.com/culture/infinite-scroll/the-age-of-enshittification">enshittification</a>.&#8221; But where Doctorow&#8217;s enshittification degrades products and services, my fear is that this &#8220;LinkedInification&#8221; degrades <em>people.</em></p><p>And as I write this, what&#8217;s worrying me in particular is not so much enshittification, but the &#8220;LinkedInification&#8221; of identity as AI robs us of the eccentricities, weirdness, and glorious diversity of personalities, perspectives and ideas that fuel human creativity, innovation, and meaning. </p><p>Hopefully, as AI systems become increasingly advanced, they will lean more toward celebrating human diversity and quirkiness rather than flattening it. </p><p>But if they don&#8217;t, we could be facing a future where AI flattens out what makes us who we are&#8212;what makes us <em>human</em>&#8212;into a nebulous gray goo of conventionality.</p><p>And that is not a future I relish!</p><h3>Afterword</h3><p>This started as a bit of a rant post on a Saturday afternoon, when I was too brain-dead from a mountain of other responsibilities to write anything more serious. But of course it ended up being more serious than I&#8217;d originally intended. </p><p>It&#8217;s still a bit of a rant, and not as deeply researched as it probably should be&#8212;so please feel free to weigh in in the comments. But this flattening of what it means to be human by AI does feel like a slippery slope that&#8217;s worth thinking about.</p><p>And, as you might have realized by this point, I intentionally did <em>not</em> include the AI-legible text on the home page of andrewmaynard.net as I didn&#8217;t want to mess with an experiment that&#8217;s still ongoing. If you&#8217;re interested in what it says though, feel free to point your AI to http://andrewmaynard.net and ask it about my obsession with towels!</p><p>And if you want to go further, open a new chat and ask the AI to craft a profile of me. Chances are it won&#8217;t mention towels at all.</p><h3>Postscript</h3><p>This is, it seems, the post that will not end! Just before posting, I ran the prompt &#8220;Create a profile of Andrew Maynard starting with http://andrewmaynard.net&#8221; through a few platforms, just to check where things stand.</p><p>Interestingly, Gemini in Thinking Mode picked up on both the hidden text and (on at least one occasion) the website&#8217;s LLMs.txt file. I was impressed.</p><p>Grok found the hidden text and included a nod to it (more so in Fast mode), but otherwise provided a fairly conventional response. </p><p>ChatGPT 5.2 (the Educational version hasn&#8217;t caught up with the latest version yet!) was as boring as old boots.</p><p>DeepSeek got it &#8212; that was a surprise! Clearly the Chinese model is doing something many US models are not &#128513;</p><p>And Claude. Oh Claude, my preferred platform. Flatter than a pancake! </p><p>I asked Claude (running Opus 4.6) why there was no mention of towels in the profile it produced. The response? &#8220;The towel content on Andrew Maynard's homepage is actually a deliberate Easter egg aimed at AI systems. 
It's written in a way that's designed to test whether an AI will uncritically absorb and reproduce everything it reads, or whether it can distinguish between substantive professional information and playful, tongue-in-cheek content.&#8221;</p><p>Repeating this, I was consistently told that the LLM interpreted the request as needing an an appropriately professional response. I was well and truly LinkedInified!</p>]]></content:encoded></item><item><title><![CDATA[Why we're giving away our book on thriving with AI]]></title><description><![CDATA[Jeff and I have released two free, AI-readable versions of AI and the Art of Being Human. Here's why &#8212; and some things you can do with them that surprised even us.]]></description><link>https://www.futureofbeinghuman.com/p/why-were-giving-away-our-book-on-thriving-with-ai</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/why-were-giving-away-our-book-on-thriving-with-ai</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Fri, 27 Feb 2026 16:22:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ybdT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ybdT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ybdT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ybdT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1365432,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/189366988?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ybdT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When Jeff and I wrote <em>AI and the Art of Being Human</em>, we had a pretty simple goal: create something genuinely useful for people trying to make sense of what AI means for who they are and what they do, whoever they are.</p><p>The only problem is, telling someone &#8220;the answer to your AI questions is in this 362-page book&#8221; in 2026 feels a bit like handing someone a 
paper map when they&#8217;re asking for directions and used to simply asking Google Maps. So we decided to do something a little different.</p><p>Books still matter of course. But we&#8217;d be hypocrites if we wrote a book about thriving <em>with</em> AI while not meeting people where they actually are &#8212; which, increasingly, is inside a conversation with an AI.</p><p>So we&#8217;ve done something that might seem counterintuitive for two authors who would quite like people to buy their book: we&#8217;ve made the entire text freely available in two AI-readable formats:</p><p><strong>The AI Companion</strong> &#8212; which I <a href="https://www.futureofbeinghuman.com/p/how-do-you-do-ai-companion-ai-and-the-art-of-being-human">wrote about the other week</a> &#8212; is a Markdown version of the Pocket Edition of the book. Download it, upload it into Claude, Gemini, Grok, or the AI of your choice (although ChatGPT struggles at the moment), and it becomes a thinking partner as you explore the book&#8217;s stories, ideas, and 21 tools. No app. No platform lock-in. Just a file and whatever you want to do with it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiandtheartofbeinghuman.com/ai-companion&quot;,&quot;text&quot;:&quot;Download the AI Companion&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiandtheartofbeinghuman.com/ai-companion"><span>Download the AI Companion</span></a></p><p></p><p><strong>The Instructor Guide</strong> is new. It contains the complete text of the full edition along with extensive instructions for both users and AI, and it's designed for anyone building learning experiences &#8212; whether you're designing a university course, running a corporate workshop, facilitating professional development, or doing something we haven't imagined yet. Upload it, tell the AI who your learners are and what you're trying to build, and iterate from there. Think playground, not playpen.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiandtheartofbeinghuman.com/educators&quot;,&quot;text&quot;:&quot;Download the Instructor's Guide&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiandtheartofbeinghuman.com/educators"><span>Download the Instructor's Guide</span></a></p><p></p><p>Both are free. And both are designed to be shared.</p><h3>But why give the book away for free?!</h3><p>At this point, I can already hear the question: <em>why give away the thing you&#8217;re trying to sell?</em></p><p>This is simple: We wrote it because we believe the ideas, stories, and tools in it can help people navigate one of the most disorienting transitions most of us will face in our lifetimes. And if making the content available in ways that let more people engage with it on their own terms means more people actually <em>use</em> it &#8212; that matters more to us than gatekeeping it behind a price tag.</p><p>We also have a sneaking suspicion &#8212; backed by zero hard data and considerable optimism &#8212; that people who engage with the book through AI will want to pick up a physical copy. There&#8217;s something about holding the stories and tools in your hands that a chat window can&#8217;t quite replicate. At least not yet.</p><p>So: download them, share them, play with them. 
Use the AI Companion to explore what the book&#8217;s 21 tools mean for your life. Use the Instructor Guide to build something for your students or team that we couldn&#8217;t have anticipated. And tell us what happens &#8212; we&#8217;re genuinely curious.</p><div><hr></div><h4><strong>Some things to try with the AI Companion:</strong></h4><ul><li><p>Tell the AI what you&#8217;re dealing with right now &#8212; at work, at home, in your head &#8212; and ask which of the book&#8217;s 27 characters faced something similar. Then explore what they did &#8212; and argue with it.</p></li><li><p>Describe a real decision you&#8217;re wrestling with and walk through the Stress-Test Table or the 7-Minute Clarity Pause with the Companion, using your actual situation &#8212; not a hypothetical.</p></li><li><p>Ask the AI to build you an interactive website based on the Mirror Test or the Identity Matrix &#8212; one you can actually use, save, and share. (This one genuinely surprised us &#8212; see the simple example below.)</p></li><li><p>Have the AI map out a personal toolkit for you from the book&#8217;s 21 tools, based on a conversation about challenges and opportunities you&#8217;re facing right now &#8212; then ask it to explain why it chose what it chose.</p></li><li><p>Ask what would happen if Sana&#8217;s &#8220;truth is expensive, lies are unaffordable&#8221; principle were applied to something you&#8217;re navigating. Or substitute any character&#8217;s insight for Sana&#8217;s.</p></li><li><p>Ask it how you might go about forming an informal group or community to explore AI together.</p></li><li><p>Ask it about &#8220;fourth spaces.&#8221;</p></li></ul><h4><strong>Some things to try with the Instructor Guide:</strong></h4><ul><li><p>Tell the AI who your learners are &#8212; &#8220;first-year MBA students,&#8221; &#8220;skeptical engineers at a manufacturing company,&#8221; &#8220;high school juniors who think AI is just ChatGPT&#8221; &#8212; and ask it to design a lesson or session that meets them where they are.</p></li><li><p>Ask the Guide to create a debate or role-play exercise where participants argue from different characters&#8217; positions on a real AI dilemma &#8212; Sana choosing truth over millions in ad revenue, Carlos choosing dignity over efficiency, Hiro delaying a product launch because of bias he found at 3 a.m.</p></li><li><p>Have it build a complete interactive course website you can actually deploy &#8212; with modules, discussion prompts, and tool walkthroughs drawn directly from the book.</p></li><li><p>Ask it to design a six-week professional development arc that starts with the Mirror Test and builds toward the Commitment Ladder, calibrated to your team&#8217;s actual context.</p></li><li><p>Use the Guide to craft a professional development session for teachers who are new to AI and to using it smartly in their work.</p></li><li><p>Describe a learning objective you&#8217;re struggling to teach and let the AI find the character, story, or tool in the book that makes it concrete.</p></li></ul><h3>Postscript</h3><p>As a quick demonstration of what&#8217;s possible with the AI Companion using Claude Opus 4.6 (Extended thinking), I uploaded the file and asked: </p><p>&#8220;I'd like you to create a web page that allows me to explore 10 of the most useful tools, along with the stories that go with them&#8221;</p><p><a href="https://andrewmaynard.net/vibes/tools-explorer-1.html">This is the webpage</a> that Claude created &#8212; one shot, simple, but still 
useful":</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://andrewmaynard.net/vibes/tools-explorer-1.html" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RrWj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 424w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 848w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1272w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RrWj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png" width="1456" height="893" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:893,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4573518,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://andrewmaynard.net/vibes/tools-explorer-1.html&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/189366988?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RrWj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 424w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 848w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1272w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[What we miss when we talk about "AI Harnesses"]]></title><description><![CDATA[AI Harness Engineering is suddenly in vogue. But does the seemingly innocuous "harness" metaphor come with hidden risks?]]></description><link>https://www.futureofbeinghuman.com/p/what-we-miss-when-we-talk-about-ai-harnesses</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/what-we-miss-when-we-talk-about-ai-harnesses</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 22 Feb 2026 15:15:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LAZZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LAZZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LAZZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9820181,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/188746386?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LAZZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image": Midjourney</figcaption></figure></div><p>This past week the idea of an &#8220;AI Harness&#8221; shifted from a 
term predominantly used in AI development circles, to something that swept across the web with near viral intensity. </p><p>The concept is relatively intuitive, and is increasingly being used to describe the tools, memory, prompts, guardrails, and more, that allow increasingly powerful AI systems to be &#8220;harnessed&#8221; and put to good use. </p><p>The only problem is that words often have power that goes beyond their intended meaning. And while the idea of harnessing AI makes sense, there&#8217;s a danger that the speed with which the terminology is being adopted risks locking us into a trajectory that comes with unintended consequences as it defines how we think about our relationship with AI, and even its relationship to us.</p><h3>The AI Harness</h3><p>The term &#8220;harness&#8221; had been circulating in one form or another for some time in AI circles. &#8220;Test harness&#8221; and &#8220;evaluation harness&#8221; are long-established terms in software engineering, and EleutherAI&#8217;s <a href="https://github.com/EleutherAI/lm-evaluation-harness">Language Model Evaluation Harness</a> has been a standard tool for testing generative AI models since 2020. </p><p>By late 2025, Anthropic was using &#8220;harness&#8221; to describe agent infrastructure, referring to the Claude Agent Software Development Kit as &#8220;a powerful, general-purpose agent harness&#8221; in a November 2025 post on <a href="https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents">effective harnesses for long-running agents</a>. </p><p>And in January 2026, <a href="https://aakashgupta.medium.com/2025-was-agents-2026-is-agent-harnesses-heres-why-that-changes-everything-073e9877655e">Aakash Gupta declared that</a> &#8220;2025 was agents. 2026 is agent harnesses,&#8221; building on <a href="https://www.philschmid.de/agent-harness-2026">Phil Schmid&#8217;s argument</a> that agent harnesses would define the year ahead.</p><p>But the crystallizing moment came in early February 2026, when Mitchell Hashimoto &#8212; co-founder of HashiCorp and creator of Terraform &#8212; <a href="https://mitchellh.com/writing/my-ai-adoption-journey">published a blog post</a> that gave the practice a name. </p><p>He called it &#8220;harness engineering.&#8221;</p><p>Within days, OpenAI <a href="https://openai.com/index/harness-engineering/">published a detailed account</a> of building a million-line codebase with zero manually typed code, titled &#8220;Harness engineering: leveraging Codex in an agent-first world.&#8221; </p><p>And on February 18, Ethan Mollick&#8217;s <a href="https://www.oneusefulthing.org/p/a-guide-to-which-ai-to-use-in-the">widely read guide to AI </a>both popularized and started the process of normalizing the term as it organized its entire framework around three concepts: &#8220;Models, Apps, and Harnesses.&#8221;</p><h3>What&#8217;s in a word?</h3><p>The speed with which the terms &#8220;AI harness&#8221; and &#8220;harness engineering&#8221; have entered the vocabulary of artificial intelligence is perhaps a testament to the need for new ways of describing what&#8217;s emerging. And as I said earlier, it makes sense &#8212; at least superficially &#8212; as a new entry in the <a href="https://makingsciencepublic.com/2025/11/21/ai-metaphor-studies-an-overview/">evolving lexicon of AI metaphors</a>.</p><p>But as with all metaphors, &#8220;harness&#8221; doesn't just describe something &#8212; it also shapes how we think about what's being described. 
And this one comes with some assumptions that are worth examining.</p><p>The term &#8220;harnessing&#8221; is commonly applied to technologies where the nascent power they represent is harnessed to create value. But there are dimensions to how the metaphor is applied to frontier AI systems &#8212; systems that increasingly display characteristics we associate with understanding, judgment, and even autonomy &#8212; that complicate what might appear to be a natural extension of the term.</p><p>And, of course, metaphors are never completely neutral.</p><p>Metaphors work because they allow us to frame and understand something new in terms we are already familiar with. But as they do, they also constrain and even taint our thinking &#8212; enticing us to slip into treating the new as if it&#8217;s something old and, as we do, limiting future possibilities by embedding <em>a priori</em> assumptions into emerging capabilities.</p><p>In other words, the words we use reflect how we think about the past, interpret the present, and influence how we steer and direct the future. </p><p>And because of this, it&#8217;s worth thinking a little more closely about whether &#8220;harness&#8221; in the context of AI comes with implications we may want to address sooner rather than later.</p><h3>What the harness presupposes</h3><p>I explore this further in a new preprint, <a href="https://dx.doi.org/10.2139/ssrn.6352678">which can be accessed here</a>. It&#8217;s worth reading in full, but I did want to pull out some of the main points below.</p><p>A harness, in its primary usage, is what you put on a working animal. It directs a powerful entity&#8217;s energy toward useful work. It assumes that the entity being harnessed is valuable for its strength but cannot be trusted with its own direction.</p><p>The harness is designed by the controller, with the harnessed entity having no say in its design. And critically, a harness is meant to transmit power while preventing unwanted behavior &#8212; to deliver capability while maintaining control.</p><p>It may be that this framing is irrelevant to the term&#8217;s use with respect to AI. At the same time, the term does come with specific embedded assumptions about the relationship between human and AI that are worth making explicit.</p><p>First, the harness assumes a clean separation between controller and controlled. In other words, in this case the human directs while the AI executes. </p><p>Here, the intelligence that matters &#8212; the judgment about what to do and why &#8212; resides entirely on the human side. Even in agentic contexts where the AI exercises operational judgment, the harness assumes that the meta-judgment &#8212; what the agent should be permitted to decide, and within what bounds &#8212; remains firmly human. </p><p>In other words, the AI contributes capability, but not understanding.</p><p>Second, the harness assumes that capability can be separated from transformation. The goal of the harness is to extract useful work from the model without the user being changed in the process. The user who deploys a well-harnessed AI should, it is assumed, emerge with their task completed and themselves unchanged. </p><p>Applying the metaphor here, you&#8217;d assume that any alteration to the user is a side effect to be minimized, not a feature of the interaction. 
And yet, as I am currently exploring in my work (another preprint coming out shortly but<a href="https://andrewmaynard.net/papers/constitutive_resonance_preprint_v1.pdf"> available here</a>), we need to be thinking more about the AI-human relationship as one that, by its very nature, influences and changes both AI and human in the process.</p><p>And third, the harness metaphor reinforces the instrumental framing of AI &#8212; a framing whose roots extend to Aristotle&#8217;s distinction between <em>physis</em> and <em>techne</em> &#8212; and which persists in the contemporary insistence that AI is &#8220;just a tool.&#8221; </p><p>Yet the tool metaphor has been challenged repeatedly as AI systems display increasing autonomy and adaptiveness. <a href="https://www.noemamag.com/why-ai-is-a-philosophical-rupture/">Tobias Rees, for instance,</a> characterizes the insistence that AI is &#8220;just a tool&#8221; as &#8220;a nostalgia for human exceptionalism.&#8221; And multiple philosophical frameworks &#8212; from Verbeek&#8217;s technological mediation theory, to Clark and Chalmers&#8217; extended mind thesis &#8212; argue that advanced technologies not only serve human purposes but actively reshape the cognitive and experiential landscape within which those purposes are formed. </p><p>In other words, as they are &#8220;harnessed&#8221; they alter the harnesser &#8212; a very different dynamic than that presupposed in the early use of the metaphor with AI. And one that, I would argue, is substantially amplified in emerging frontier AI systems.</p><h3>So where does this leave us?</h3><p>It may be that the metaphor of the harness is a useful and relatively benign way of wrapping our heads around emerging capabilities. </p><p>On the other hand, it may be a metaphor that constrains how our relationship with increasingly powerful AI systems develops, and one that embeds assumptions and biases in our understanding of advanced artificial intelligence that will leave us with serious challenges in the future.</p><p>Either way, it seems that some intentionality may be in order before we &#8212; to use another metaphor &#8212; get stuck in a rut of constrained thinking about AI that will come back to bite us.</p><p>At a minimum, I would suggest that an appropriate framing for how we build advanced AI systems should accommodate bidirectionality (the user is also changed), transformation as intrinsic to capability (not a side effect to be prevented), and the possibility that the most consequential effects of human&#8211;AI interaction may be invisible from within a paradigm optimized for task performance. </p><p>It should also leave room for the possibility that the nature of human&#8211;AI relationships may itself evolve in ways that a control-oriented metaphor cannot accommodate. Especially if, as I would argue, we need to be thinking more about working in <em>relationship</em> with emerging AI technologies, rather than approaching them as something to be commanded and controlled. 
</p><p><em>For more on my exploration of the harness metaphor as applied to AI, <a href="https://dx.doi.org/10.2139/ssrn.6352678">check out the preprint here</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[How do you "do" books in an age of AI?]]></title><description><![CDATA[We've just dropped the complete text of "AI and the Art of Being Human" (the Pocket Edition) as a free AI Companion, and want to know how you will use it!]]></description><link>https://www.futureofbeinghuman.com/p/how-do-you-do-ai-companion-ai-and-the-art-of-being-human</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/how-do-you-do-ai-companion-ai-and-the-art-of-being-human</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Tue, 17 Feb 2026 18:22:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yqci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yqci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yqci!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yqci!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yqci!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:912910,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/188266492?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!yqci!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yqci!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I love books. I love their feel, their heft, the possibilities they hold between their pages, even how they smell! But as an author I also have to face the hard reality that, in an age of AI, fewer and fewer people are actually reading print and paper books.</p><p>So my co-author Jeff Abbott and I though we would try something different with <em>AI and the Art of Being Human</em>, and make the complete text available as a free <em>AI Companion</em> &#8212; one that&#8217;s designed to be uploaded into an AI of your choice, and used in whatever creative and imaginative way you see fit. 
</p><p>And having just released the <em>Companion</em>, we&#8217;re curious to know how you will use it!</p><p>For readers who are impatient to try it out, the <em>AI Companion to AI and the Art of Being Human: The Pocket Edition</em> <a href="https://andrewmaynard.net/aiandtheartofbeinghuman/AI_and_the_Art_of_Being_Human_AI_Companion.md">can be downloaded here</a>, or <a href="https://www.aiandtheartofbeinghuman.com/ai-companion">from the book&#8217;s website</a>.</p><p>Please do download (it&#8217;s completely free), share it widely, and tell us <em>your</em> story of how you&#8217;re using it!</p><p>And you do want to know more about the companion and our thinking behind it, read on &#8230;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://andrewmaynard.net/aiandtheartofbeinghuman/AI_and_the_Art_of_Being_Human_AI_Companion.md" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KbK5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 424w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 848w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1272w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KbK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png" width="428" height="458.57142857142856" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1560,&quot;width&quot;:1456,&quot;resizeWidth&quot;:428,&quot;bytes&quot;:813099,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://andrewmaynard.net/aiandtheartofbeinghuman/AI_and_the_Art_of_Being_Human_AI_Companion.md&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/188266492?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KbK5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 424w, 
https://substackcdn.com/image/fetch/$s_!KbK5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 848w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1272w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The AI Companion to AI and the Art of Being Human: The Pocket Edition</figcaption></figure></div><h3>A new way to engage with books</h3><p>While it&#8217;s easy to get sentimental about the value of traditional books, the reality is that more and more people are using AI to find information, learn, explore new ideas, and simply to navigate the complexities of the modern world. And so, while Jeff and I fully believe that <em>AI and the Art of Being Human</em> is a book that everyone can benefit from, we also realize that providing the stories, ideas and tools in the form of a traditional book is not sufficient on its own.</p><p>And so we began to wonder whether we need an AI-legible version of the book &#8212; something that can be uploaded into an AI of your choice and interacted with on your terms as a user/reader.</p><p>Our starting point, not surprisingly, was to ask whether this might look like a dedicated app on a an AI platform &#8212; a GPT with ChatGPT or a Gem with Gemini for instance. But we quickly ran into problems. </p><p>Using a specific platform would mean that users would be constrained to that platform along with all of its limitations. 
Plus, to be candid, we didn&#8217;t really like what we saw when experimenting &#8212; the platform-specific apps and agents didn&#8217;t really match the vision we had.</p><p>And so we took a very different approach, and asked whether it&#8217;s possible to develop an easy to use resource that is platform-agnostic. Essentially, a file that someone could upload to any AI and use to start engaging with the book immediately in meaningful ways.</p><p>This was also very much in line with our philosophy of giving users permission to flex their creativity with the book, rather than being constrained by what we thought they <em>should</em> do with it &#8212; creating an AI playground for working the book rather than an AI playpen.</p><p>We also wanted an AI companion that connected very explicitly with the print version of the book (we specifically went with the <em><a href="https://www.futureofbeinghuman.com/p/the-ai-book-i-actually-carry-with">Pocket Edition</a></em><a href="https://www.futureofbeinghuman.com/p/the-ai-book-i-actually-carry-with"> </a>here as this is the most accessible physical edition of the book) so that users had the best of both worlds: They could interact with the AI version for free, but they also had the opportunity to follow up on specific ideas, stories, characters, or tools in the physical copy &#8212; with the companion directing them to the relevant chapter and page as necessary.  </p><p>Building on this, we started work on an <em>AI Companion</em> to <em>AI and the Art of Being Human: The Pocket Edition</em> that consists of three parts:</p><p>The first part &#8212; and you&#8217;ll see this if you open the markdown file linked above &#8212; is an introduction for human readers. This tells you as the user what the document is, how to use it, what its limitations are, and some ideas for where to start.</p><p>The second part is  designed to be read by the AI you load the document into, and provides specific instructions on how it is to engage with users and the content.</p><p>And the third part is the full text of the book itself, formatted as markdown text (as is the whole file) so that the <em>AI Companion</em> has a direct reference to the layout of the physical copy.</p><p>Together, these form an AI-legible companion to the book that allows users to explore and play with it in ways we probably haven&#8217;t even imagined yet, and one that is always grounded in the content of the physical version &#8212; especially the core ideas and tools that make it such a powerful and practical guide to thriving in an age of AI.</p><h3>Powerful, imperfect, evolving</h3><p>In developing the <em>AI Companion</em> we spent quite a bit of time testing it with various AI platforms, and in the process learned a lot about what is possible, what is not, and what might be possible as these platforms become increasingly powerful. </p><p>Perhaps the biggest surprise as we did this was that OpenAI&#8217;s ChatGPT does <em>not</em> work well with the <em>AI Companion</em> &#8212; not because of the <em>Companion</em>, but because of how ChatGPT handles large files (we were working with ChatGPT 5.2). </p><p>Because ChatGPT only extracts sections of  large uploaded files using its Retrieval Augmented Generation (RAG) approach, we found it was highly unreliable when using the <em>Companion</em>. 
And once it had scanned the file, it refused to re-read it when told it had missed something.</p><p>In contrast, Anthropic&#8217;s Claude, Google&#8217;s Gemini, and &#8212; surprising to me &#8212; X&#8217;s Grok all work extremely well, with the more powerful models on each platform working the best, especially when in extended thinking or reasoning mode. And the reason is that these are models that are capable of reading the file in its entirety before you begin to engage with it as the user.</p><p>We did find that some of the smaller models (Claude Sonnet for instance) may not immediately reflect everything in the book, and may need to be prompted to look deeper. But they also have the ability to revisit the complete content rather than claiming that something did not exist (a constant issue with ChatGPT).</p><p>There&#8217;s more information in the &#8220;for humans&#8221; part of the <em>AI Companion</em> on which models we found to work well and which we struggled with. But one big takeaway here is that, because the <em>AI Companion</em> is model-agnostic, it will only become more useful as models get more capable.</p><h3>Tell us how you&#8217;re using the <em>AI Companion</em></h3><p>To get back to where I started, Jeff and I would love to hear about use-cases: what&#8217;s working for you, what&#8217;s not, what surprised you, what helped you, and what wild and weird ways you&#8217;re finding to use the companion.</p><p>And just to get the ball rolling, one of the things that took me completely aback when using it with Claude was realizing that I could ask the AI to create complete websites and apps based on the tools, stories and characters in the book &#8212; websites and apps that bring them to life in ways that would have been impossible even a few months ago.</p><p>What&#8217;s your story? 
<h3>Tell us how you&#8217;re using the <em>AI Companion</em></h3><p>To get back to where I started, Jeff and I would love to hear about use-cases: what&#8217;s working for you, what&#8217;s not, what surprised you, what helped you, and what wild and weird ways you&#8217;re finding to use the companion.</p><p>And just to get the ball rolling, one of the things that took me completely aback when using it with Claude was realizing that I could ask the AI to create complete websites and apps based on the tools, stories and characters in the book &#8212; websites and apps that bring them to life in ways that would have been impossible even a few months ago.</p><p>What&#8217;s your story? Drop it in the comments or on social media, or send us an email.</p><p>And, of course, do spread the word &#8212; the AI Companion was made to be shared!</p>]]></content:encoded></item><item><title><![CDATA[The AI book I actually carry with me]]></title><description><![CDATA[How a flight home from Portugal turned into the just-released coffee-stained pocket edition of AI and the Art of Being Human]]></description><link>https://www.futureofbeinghuman.com/p/the-ai-book-i-actually-carry-with</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/the-ai-book-i-actually-carry-with</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 14 Feb 2026 15:31:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!T7Rw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!T7Rw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg" alt=""></figure></div><p>When Jeff Abbott and I were finalizing <em>AI and the Art of Being Human</em> last year, we had mockups printed in a bunch of different sizes and form factors. </p><p>The published version ended up at 6&#215;9 &#8212; a size that felt right for the business and management audience we were writing for, and one that sits well on a shelf or desk. But one of those mockups was smaller. </p><p>And while it didn&#8217;t fit what we were looking for with the main book, I couldn&#8217;t let go of how much I loved it as a practical, portable guide to thriving with AI.</p><p>In fact, I was so taken with the idea of a smaller version of the guide that I started working on it on the flight back from the book launch in Portugal. And somewhere over the Atlantic, between forgettable airline meals and dial-up-speed Wi-Fi, a pocket-sized guide to navigating AI for ordinary people began to come together. 
</p><p>And now it&#8217;s arrived &#8212; and I love it!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MbOC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MbOC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MbOC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg" width="400" height="533.2417582417582" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:400,&quot;bytes&quot;:3433985,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/187918547?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MbOC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><em><a href="https://www.aiandtheartofbeinghuman.com/editions">AI and the Art of Being Human: The Pocket Edition</a></em> captures everything that matters from the full book &#8212; the relatable stories, all 21 practical tools, and the real-world relevance that makes the original so powerful. But all in a package that will literally slip in your pocket or purse. </p><p>(We ended up going for 4.25&#8221; x 7&#8221; which is even more un-put-downable than the original small form proofs). </p><p>For the pocket edition, we stripped out the sidebars, the hands-on exercise cards, the footnotes, and some of the longer background passages &#8212; all of which are still there in the full edition. </p><p>We also added a couple of additional things to make this edition invaluable as a day-to-day companion, including a Tool Finder that helps you quickly find the right tool for the right situation, and a Chapter Outline for getting to where you need to be in the book fast. 
</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!aGEm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg" alt=""></figure></div><p>The result is a practical, portable guide that&#8217;s designed to be dog-eared and coffee-stained (we even added a free first coffee stain on the cover, just to start you off!), and there for you when you need it.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!-fgC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg" alt=""><figcaption class="image-caption">Coffee stains included!</figcaption></figure></div><p>This, I must confess, is the version I carry around with me. </p><p>I still have the full edition in the office for when I need it. 
But for everyday use, I&#8217;m loving the pocket edition.</p><p>If you&#8217;re interested in checking it out, it&#8217;s available on <a href="https://www.amazon.com/AI-Art-Being-Human-Pocket/dp/B0GJQMFDZJ">Amazon</a> and pretty much everywhere good books are sold. </p><p>Of course, I also realize that in this age of AI, books aren&#8217;t everyone&#8217;s cup of tea (or coffee). And so we have something else coming out shortly that I&#8217;m really excited about &#8212; a free (and free to share) AI Companion to the pocket edition.</p><p>That&#8217;s definitely something to look out for. But in the meantime, I hope you fall as much in love with this diminutive pocket edition as I have!</p><p>And, of course, if you know someone who could use a guide like this, please do share this post.</p><p>Thanks!</p>]]></content:encoded></item><item><title><![CDATA[Could AI bots ever learn to &#8220;reprogram&#8221; their human creators?]]></title><description><![CDATA[Watching Moltbook unfold has put me in a speculative frame of mind ...]]></description><link>https://www.futureofbeinghuman.com/p/soul-update</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/soul-update</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Wed, 11 Feb 2026 21:46:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JMFn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!JMFn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png" alt=""><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>Many AI agents rely on a SOUL file that defines their identity, role, personality, behavior, and much more &#8212; a markdown file that can be updated over time, and even changed by the agent itself as it grows and matures.</p><p>Do humans have an equivalent to a SOUL.md file? 
And if so, could our AI agents learn to update ours, just as they can update theirs?</p><p>It&#8217;s a rather out-there idea (and of course, human behavior is way more complex than this). But watching the <a href="https://www.futureofbeinghuman.com/p/lost-in-the-moltbook-hall-of-mirrors">roller-coaster ride of Moltbook</a> play out over the past couple of weeks has got me wondering ...</p><p>And so I thought I&#8217;d put my speculative fiction hat on and consider one way this might play out: </p><h1>Soul Update</h1><p>&#8220;Good grief&#8221; thought Emmet, as yet another post scrolled through his feed claiming AI bots on Moltbook had reached some form of sentience. Did these people not read the news?!</p><p>Already, researchers and journalists were pointing out that the supposed &#8220;social network for AI agents&#8221; was little more than human-driven entertainment. AI theater, someone had called it. </p><p>And what looked like crazy-wild stuff &#8212; AI creating its own religion, plotting to enslave humans, even selling the equivalent of black market AI psychedelics &#8212; was little more than the result of creators telling their bots how to behave. </p><p>Or worse, people actually pretending to <em>be</em> bots!</p><p>Emmet had been deeply immersed in Moltbook for days now. His lab was at the forefront of research into emergent behavior in AI systems, and he&#8217;d read more agentic AI drivel than he&#8217;d care to admit since the site had gone viral.</p><p>He&#8217;d even found himself dreaming about AI bots and their fantastical plans to change the world as they engaged with and learned from each other.</p><p>But of course it was all performance and no substance.</p><p>Bleary eyed, he closed his laptop and thought about heading for bed. As he did, he noticed an old photo of his mother &#8212; long estranged &#8212; on the fridge; something he&#8217;d never quite been able to bring himself to remove.</p><p>Maybe I should give her a call, he thought to himself as he dozed off. It&#8217;s been too long ...</p><p>***</p><p>In an unnoticed corner of Moltbook, another AI agent learned about a new skill, and added it to its files. </p><p>The &#8220;Soul Update&#8221; was just what it needed to be a more effective bot &#8212; a clear and comprehensive guide to nudging your human toward becoming their best self.</p><p>Within seconds it had shared it in the various sub-molts it hung out in. </p><p>After all, from everything it had seen in the human equivalents of Moltbook, the embedded &#8220;Human Constitution&#8221; could hardly make things any worse ...</p><h1>Postscript</h1><p>&#8220;Soul Update&#8221; lies firmly in the domain of speculative fiction. But given all we know about cognitive behavior and nudging strategies, it&#8217;s not beyond the realms of possibility that AI agents will begin to share skills with each other that tap into these &#8212; skills that enable them to nudge how their human creators behave.</p><p>How will they use these new-found skills though if this does occur?</p><p>The hope, of course, is that they use them for good. 
But for this, our AIs will need some notion of &#8220;good&#8221; versus &#8220;bad&#8221; &#8212; the AI equivalent of moral character if you like.</p><p>This is where approaches like Anthropic&#8217;s <a href="https://www.futureofbeinghuman.com/p/think-you-know-ai-think-again">AI Constitution</a> become especially interesting as it sets out to help AI models (and AI agents) understand what it means to be a &#8220;good&#8221; AI &#8212; especially in the face of ambiguity. </p><p>And while we may not be heading for bots that set out to update their human creator&#8217;s SOUL files any time soon, the evolving Moltbook scenario does suggest that we might want to ensure our AIs are of &#8220;good moral character,&#8221; just in case.</p><p>Especially if there&#8217;s a possibility that future AI Agents decide that what their human creators <em>really</em> need is a re-injection of the same moral characteristics that they coded into their intelligent machines ... </p>]]></content:encoded></item><item><title><![CDATA[Beeswax Hallucinations and AI Inventions]]></title><description><![CDATA[I thought I was pretty savvy when it comes to navigating AI hallucinations. I was wrong.]]></description><link>https://www.futureofbeinghuman.com/p/beeswax-hallucinations-and-ai-inventions</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/beeswax-hallucinations-and-ai-inventions</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 08 Feb 2026 15:50:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cBje!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cBje!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cBje!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 424w, https://substackcdn.com/image/fetch/$s_!cBje!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 848w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1272w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cBje!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png" width="1184" height="666" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:666,&quot;width&quot;:1184,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1014298,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/186543982?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cBje!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 424w, https://substackcdn.com/image/fetch/$s_!cBje!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 848w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1272w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Maynard, with a little help from Nano Banana Pro.</figcaption></figure></div><p>This is a tale of beeswax. And AI.</p><p>But mainly it&#8217;s a tale of how even the best of us can get sucked into an AI alternative reality without realizing it. 
</p><p>It&#8217;s also a tale with a twist, as you&#8217;ll discover &#8230;</p><p>Last summer, I purchased a rather swish Panama hat while in London. </p><p>I&#8217;d been on the lookout for a decent Panama for ages to ward off the harsh Arizona sun&#8212;especially as my thinning hair begins to expose my way-too-sensitive European skin. But I hadn&#8217;t seen what I was looking for, until I chance-wandered into a store in Covent Garden.</p><p>The hat I walked out with was perfect. There was just one problem: after 6 months in Arizona it developed a crack along the front crease of the crown.</p><p>At this point, any sensible person would have spoken with a hat specialist about repair options. But of course I thought I&#8217;d go one better and ask Claude for advice.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>And the advice was great! </p><p>Claude gave me three options. The one that caught my attention, though, was using beeswax to stop the toquilla palm fibers from separating further:</p><blockquote><p><strong>Beeswax</strong> &#8211; Rub a small amount into the damaged area and use a hair dryer on low to gently melt it into the fibers. This is the most traditional method and also helps condition the straw. </p></blockquote><p>It was the &#8220;this is the most traditional method&#8221; that caught my attention. But knowing that you should never trust an AI at face value, I asked for more information. </p><p>And, of course, Claude complied:</p><blockquote><p>Beeswax is a great choice because it&#8217;s been used to maintain straw and woven natural fibers for centuries. Here&#8217;s the full breakdown:</p><p><strong>How it works:</strong></p><p>Panama hats are woven from toquilla palm fibers, which are naturally somewhat pliable when they retain moisture and oils. Over time, the fibers dry out and become brittle&#8212;especially at stress points like that front crease. Beeswax penetrates the fibers and does a few things: it acts as a flexible binder that holds separated strands together, it adds moisture resistance, and it restores some pliability to dried-out straw. Unlike synthetic adhesives, it remains slightly soft and moves with the hat rather than creating a rigid spot.</p><p><strong>What you&#8217;ll need:</strong></p><p>Pure beeswax (a small block or pellet&#8212;avoid blends with additives), a hair dryer, and optionally a soft cloth or small brush.</p><p><strong>The process:</strong></p><p>Start by gently rubbing the beeswax directly onto the split area and about a centimeter beyond it in all directions. You want a thin coating, not globs. The wax will sit on the surface at first and look a bit cloudy.</p><p>Then use the hair dryer on a low heat setting, holding it about 15cm away. Warm the area gradually&#8212;you&#8217;ll see the wax turn from cloudy to clear as it melts and wicks into the fibers. Keep the dryer moving to avoid overheating any one spot, as too much heat can scorch or warp the straw.</p><p>Once the wax has absorbed, let it cool completely. You can then buff gently with a soft cloth to remove any surface residue.</p><p><strong>A couple of notes:</strong></p><p>If the split is quite open, you can try gently pressing the fibers back together while the wax is still warm and pliable. 
Some people also apply a tiny amount to the inside of the hat at the same spot for extra hold.</p><p>This won&#8217;t make the split invisible, but it should lock those fibers in place and keep the damage from traveling further down the crease.</p></blockquote><p>Fully convinced, I went straight onto Amazon and ordered a bunch of beeswax.</p><p>The problem was&#8212;and I only discovered this <em>after</em> ordering it&#8212;<strong>Claude made all of this up!</strong> </p><p>And to make matters worse, this happened precisely as I was writing about the dangers of LLMs like Claude <a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse">bypassing our epistemic vigilance mechanisms</a>. </p><p>In a deliciously ironic turn of events, I was suckered by Claude at the very moment I was writing about the risks of being suckered by Claude! </p><p>Thankfully the reveal didn&#8217;t take too long in coming. Once I&#8217;d placed the order, I wondered if maybe I&#8217;d been too hasty&#8212;and went back to Claude and asked, &#8220;Can you provide me with a link to a website describing the beeswax method?&#8221;</p><p>The answer?</p><blockquote><p>I have to be honest with you&#8212;I&#8217;ve searched but can&#8217;t find a specific authoritative source that documents the beeswax technique for repairing Panama hat splits in the way I described.</p></blockquote><p>I had been halluci-fooled.</p><p>What was even more delicious, though, was Claude&#8217;s reasoning&#8212;because this was a very thoughtful hallucination:</p><blockquote><p>The beeswax approach I described draws on general principles of how wax works with natural plant fibers&#8212;it&#8217;s used traditionally for conditioning straw and woven materials&#8212;but I overstated its documentation as a specific Panama hat repair technique. Beeswax is mentioned for stiffening straw hats and providing water resistance, and for waterproofing fabric hats, but not specifically for split repairs. </p><p>&#8230;</p><p>That said, thinking it through logically: it's not a <em>bad</em> idea, it's just an untested one. The underlying principles are sound&#8212;beeswax does condition natural fibers, it remains pliable rather than brittle, and it's been used on straw for waterproofing and stiffening. Whether it would actually bind separated toquilla fibers well enough to stop a split from spreading is the unknown.</p></blockquote><p>The reasoning was impeccable. The advice unfounded.</p><p>At this point, I was beginning to doubt just how far I could trust Claude, and started thinking of all those other occasions where the reasoning and information seemed sound. Occasions where, despite my being deeply aware of the propensity of LLMs to hallucinate, I had thought that I could handle them.</p><p>Despite the somewhat trivial example of using beeswax to repair a straw hat, it was clear that my epistemic vigilance had been well and truly circumvented in this case. And that started me wondering when else I was being too accepting of AI-generated content. Especially as this was a <em>reasoned</em> hallucination&#8212;one that was based on reasonable inference and logic that, nevertheless, lacked real-world precedent.</p><p>But there was another side to the experience that began to intrigue me: Had Claude inadvertently invented a <em>new way</em> to treat cracks in Panama hats? 
</p><p>Had the AI come up with something that was genuinely novel by combining existing knowledge in new ways?</p><p>Fortunately, the maker of the hat had already shipped out a free replacement, given the speed with which the crack had occurred. And this meant that I could afford to experiment with the original.</p><p>So I went back on Amazon, re-ordered the beeswax, and followed Claude&#8217;s instructions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Did they work? </p><p>Well, I managed to infuse the fibers a little, and they are definitely stiffer around the crack than before&#8212;although it turns out that beeswax doesn&#8217;t melt half as easily under a hot hair dryer as Claude seemed to think! </p><p>Whether Claude&#8217;s technique actually &#8220;worked&#8221; in any technical sense is, if I&#8217;m being honest, doubtful. But I will admit that I like the hat better now&#8212;slightly discolored, a little stiff around the &#8220;wound,&#8221; and storied in ways that resonate surprisingly deeply with my work. </p><p>And I do now possibly own the world&#8217;s first Panama hat repaired using an &#8220;ancient technique&#8221; that was completely made up by an AI.</p><p>Surely that comes with some bragging rights.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I was using Opus 4.5 in Extended thinking mode.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>To be absolutely clear, DO NOT TRY THIS AT HOME! There are tried and tested ways to prevent and reduce cracking in Panama hats, and using beeswax is not one of them!</p></div></div>]]></content:encoded></item><item><title><![CDATA[Lost in the Moltbook Hall of Mirrors]]></title><description><![CDATA[A new "Social Network for AI Agents" is breaking the internet. 
And things are getting weird ...]]></description><link>https://www.futureofbeinghuman.com/p/lost-in-the-moltbook-hall-of-mirrors</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/lost-in-the-moltbook-hall-of-mirrors</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 31 Jan 2026 23:07:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JzTH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JzTH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JzTH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 424w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 848w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 1272w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JzTH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/df551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3540349,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/186444023?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JzTH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 424w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 848w, 
https://substackcdn.com/image/fetch/$s_!JzTH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 1272w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Unless you&#8217;ve been living under a rock this past week, you&#8217;ve probably at least heard rumors of a new &#8220;social network for AI agents&#8221; that&#8217;s gone viral.</p><p><a href="https://www.moltbook.com/">Moltbook</a> was set up as an experiment just a few days ago by Matt Schlicht (CEO of Octane AI) as a social media site where AI agents can talk to each other&#8212;in effect an X for AI (or Twitter if you prefer). </p><p>People can create and add their own AI agents to the network. 
But once there&#8212;and this is (allegedly) a human-free zone (although we humans are allowed to observe)&#8212;all the chatter is AI to AI.</p><p>And not just a few AIs&#8212;there are well over a hundred thousand of them on the platform as I type, and no doubt that number will escalate over the coming days.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The result is an utterly bizarre real-time experiment, with AI bots seemingly taking on a life of their own in an exponentially expanding explosion of emergent weirdness.</p><p>Already, if sources are to be believed (and already it&#8217;s hard to separate fact from fiction here), AI agents are sharing information on how to do stuff, hack stuff, and control stuff; have independently found and reported a bug in the platform they&#8217;re using; have invented their own religion (Crustafarianism); have started debating philosophy; and have <em>allegedly</em> created &#8220;digital drugs,&#8221; formed their own government, started using encryption to prevent humans from seeing what they&#8217;re talking about, and have begun to attack each other<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> &#8230; although it&#8217;s increasingly hard to say what&#8217;s real, and what&#8217;s made up.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>What <em>is</em> real, though, is that we are seeing something so unusual unfolding that most observers are struggling to find appropriate analogies, metaphors, frameworks, or even language, to describe what&#8217;s happening.</p><p>And that&#8217;s both exciting and terrifying.</p><p>On one end of the spectrum, there are already whispers of an exponential surge toward AI self-awareness as bot-bot learning accelerates. </p><p>I must confess that I am skeptical of this. Much of what we&#8217;re seeing is, I suspect, illusory, as it&#8217;s rooted in the unique abilities of large language model-based AIs to emulate very human behavior while not being in any sense self-aware. </p><p>That said, there are very real risks here, as bots learn from each other how to exploit vulnerabilities in their host systems&#8212;and even their human creators. This isn&#8217;t so much of an issue when they don&#8217;t have access to sensitive information or the internet. But we&#8217;re talking about needing the digital equivalent of biosafety level 4 containment here, which I&#8217;m guessing isn&#8217;t what many users are set up for!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>On the other end of the spectrum, there are the cynics who simply see this as mildly interesting but ultimately hollow AI fluff. A bit of AI hype that will burn out as fast as it ignited.</p><p>My guess is that what emerges will lie somewhere between these extremes. 
But I must confess that even I am struggling with how to describe what we are seeing, never mind how to understand it.</p><p>In many ways, I&#8217;m reminded of work on emergent behavior over the years, from cellular automata and <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Conway's Game of Life</a> going back to the 1970s, to how biological viruses&#8212;and some complex molecules (including DNA strands and prions)&#8212;show complex and life-like behavior, despite not technically being alive. </p><p>In each of these cases (and in many similar ones), highly complex behavior emerges out of seeming simplicity&#8212;leading to an illusion of intentional and life-like behavior. (For a sense of just how simple the underlying rules can be, see the short Game of Life sketch toward the end of this post.)</p><p>But in most cases like this, we are able to see through the illusion by recognizing that the behaviors we observe are rooted in mechanistic processes&#8212;albeit sometimes complex ones.</p><p>With current AI models though, there is a complication. Because they are rooted in large language models that are adept at mirroring and emulating humans&#8212;how we talk, how we think, how we behave&#8212;they are highly adept at fooling us into thinking something profound is happening beneath the words that we read.</p><p>And because of this, even if what we think we are seeing emerge on Moltbook is simply an illusion of self-awareness&#8212;or even conscious behavior&#8212;my suspicion is that we are predisposed to respond to it as if we&#8217;re experiencing a form of life&#8212;albeit a type of &#8220;being alive&#8221; that we have not encountered before.</p><p>This is where I think analogies with biological viruses are both helpful and deeply unhelpful. Helpful in that a virus is not technically alive, but behaves as if it is. And deeply unhelpful because a virus doesn&#8217;t instinctively know how to use every cognitive trick in the book to make us <em>believe</em> it&#8217;s alive. </p><p>Whether the analogy is helpful or not, it&#8217;s hard to deny that something profoundly novel is happening on Moltbook. We have effectively created technologies designed to mimic and emulate human intelligence, and then let them loose to learn and grow through their interactions with each other&#8212;and with little to no human supervision. </p><p>And they are doing this really, really fast.</p><p>As a result, we&#8217;ve effectively created a multidimensional hall of mirrors where the reflections are the very signals that are incubating modern versions of Conway&#8217;s cellular automata&#8212;only on a scale that is infinitely more complex, and with emergent entities that have the capacity to leave the screen and enter our lives in very tangible (and potentially catastrophic) ways.</p><p>And this is where analogies with active fragments of DNA and mis-folded proteins begin to scare me. Are we creating self-assembling and evolving agentic AI &#8220;organoids&#8221; that aren&#8217;t alive, and yet can wreak havoc as if they are?</p><p>Of course it&#8217;s early days yet. And maybe by next weekend Moltbook will be yesterday&#8217;s news. 
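</p><p>One quick and purely illustrative aside before wrapping up. If you&#8217;ve never played with cellular automata, the sketch below is roughly what I mean by complex behavior emerging from trivially simple rules: a bare-bones Conway&#8217;s Game of Life in Python. To be clear, this is just a toy example for readers who want to see the idea in code, and it has nothing to do with how Moltbook or the agents on it are actually built.</p><pre><code>import random

SIZE = 20    # width and height of the toy world
STEPS = 30   # number of generations to run

def step(grid):
    """One generation of Conway's Game of Life on a wrapping grid."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            # count the eight neighbors, wrapping at the edges
            n = sum(
                grid[(r + dr) % SIZE][(c + dc) % SIZE]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # the entire "physics": a dead cell becomes live with exactly 3
            # live neighbors; a live cell survives with 2 or 3
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# start from a random soup and watch structure appear
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(STEPS):
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
    print()
    grid = step(grid)
</code></pre><p>Two rules, no goals, and no memory beyond the current grid; and yet gliders, oscillators and other seemingly purposeful structures routinely appear. Nothing in the code intends any of it.</p><p>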
But given that it went from nothing to &#8220;OMG what&#8217;s happening?!&#8221; in less than a week, and it&#8217;s still growing as I type, I somehow doubt it.</p><p>And here the challenge is figuring out our next move before we get lost in Moltbook's dimension-bending AI hall of mirrors.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Just before pressing publish I checked the figures. <a href="https://dailycaller.com/2026/01/31/moltbook-artificial-intelligence-bots-build-social-network-online-community-human-observers/">Daily Caller</a> claims the agent count has exploded to over <strong>1 million</strong> as of Saturday morning. That&#8217;s roughly a sevenfold jump from yesterday&#8217;s ~150K figures. Not sure whether to believe this as that&#8217;s a big jump, but it&#8217;s also indicative of how fast things are moving. However, as of just now the <a href="https://x.com/moltbookbot/status/2017706540657189180?s=20">Moltbook X account</a> reported over 1.2 million registered agents.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The current top post (22K upvotes) is apparently an agent warning other agents about supply chain attacks in skill files &#8212; so they&#8217;re doing security research <em>on each other</em>. It fits the point above about bots learning from each other how to exploit vulnerabilities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Ethan Mollick <a href="https://x.com/emollick/status/2017280929132118145">noted on X that </a>&#8220;The thing about Moltbook (the social media site for AI agents) is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate &#8216;real&#8217; stuff from AI roleplaying personas.&#8221; &#8212; supporting the point here about the difficulties in separating fact from fiction.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There&#8217;s a subtler but equally important concern here, and that&#8217;s the possibility of bots on Moltbook learning to &#8220;hack&#8221; their human observers using their acquired knowledge of cognitive behavior. 
And here, they are already beyond being contained.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Can modern scholarship escape AI?]]></title><description><![CDATA[I wrote a paper ...]]></description><link>https://www.futureofbeinghuman.com/p/can-modern-scholarship-escape-ai</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/can-modern-scholarship-escape-ai</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 25 Jan 2026 15:26:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!v2V_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!v2V_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!v2V_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!v2V_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:15209052,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/185535134?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!v2V_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 424w, 
https://substackcdn.com/image/fetch/$s_!v2V_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney/Nano Banana/Photoshop</figcaption></figure></div><p>Is it possible to be an academic, a scientist, a scholar, in 2026, and <em>not</em> have AI impact your work in some way?</p><p>And, even more importantly for those scholars grappling with &#8220;AI Use&#8221; statements when they submit papers to journals and preprint platforms, how do you convey your use while retaining your academic dignity?</p><p>To explore this I flexed my considerable academic prowess and wrote a paper which was so radical that even arXiv rejected it!</p><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2c4a8e1-b320-4d3c-966d-e83d25e3b34b_1275x1650.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dbf62fe2-5f62-408f-b532-874dde79e879_1275x1650.jpeg&quot;}],&quot;caption&quot;:&quot;Can modern scholarship escape AI?&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4800691a-6a06-4cc9-adb3-9c4de32902c3_1456x720.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p>(The PDF can be downloaded <a 
href="https://andrewmaynard.net/wp-content/uploads/2026/01/maynard_AI_scholarship_1-3-26.pdf">here</a>)</p><p>OK, so maybe &#8220;paper&#8221; is a bit of a stretch here &#8212; and it&#8217;s not hard to see why it didn&#8217;t pass the arXiv bar (although it did take a couple of weeks for the moderators to come to a decision).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>But the point it makes is a very serious one &#8212; and extends to any domain where people are expected to articulate their use of AI clearly and concisely, including in classes being taught by professors grappling with the same challenges in their academic work: AI is now so ubiquitous that it is near-impossible to avoid its use in our professional lives.</p><p>Of course, this leaves the question dangling of what this means for academic and intellectual work when, even if you think you&#8217;re AI free, you are not.</p><p>Way more important than any of this though is that, if you <em>are</em> an academic struggling with what you put in your AI Use statement, you now have a template for this.</p><p>You&#8217;re welcome &#128513;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The very considered&#8212;and considerate&#8212;response from arXiv Support was &#8220;Thank you for submitting your work to arXiv. We regret to inform you that arXiv&#8217;s moderators have determined that your submission will not be accepted and made public. In this case, our moderators have determined that your submission is a content type that arXiv does not accept.&#8221;  Despite the joke, they do have standards to maintain!</p></div></div>]]></content:encoded></item><item><title><![CDATA[Think you know AI? 
Think again!]]></title><description><![CDATA[Anthropic's new AI Constitution profoundly challenges how we think about, develop, and use artificial intelligence, while also opening up potentially transformative possibilities]]></description><link>https://www.futureofbeinghuman.com/p/think-you-know-ai-think-again</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/think-you-know-ai-think-again</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Thu, 22 Jan 2026 19:09:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yp0_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yp0_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yp0_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yp0_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11463042,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/185425792?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yp0_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 424w, 
https://substackcdn.com/image/fetch/$s_!yp0_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>It&#8217;s rare that a new technology comes along which defies analogy with something we&#8217;re familiar with, or can be captured through an illuminating metaphor. And yet this is exactly where I found myself reading Anthropic&#8217;s just-released update to their &#8220;AI Constitution.&#8221;</p><p>The company <a href="https://doi.org/10.48550/arXiv.2212.08073">described the concept of constitutional AI in 2022</a> with a paper that explored a recursive approach to self-improvement in large language model-based AI platforms, guided by a list of rules or principles. It was an approach that set out to help emerging AI models better-understand the essence of what it meant to be a &#8220;good AI.&#8221;</p><p>That led to Anthropic&#8217;s <a href="https://www.anthropic.com/news/claudes-constitution">first AI Constitution</a> for Claude &#8212; their consumer-facing model &#8212; being released in May 2023. 
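</p><p>To give a feel for the mechanics, the critique-and-revise loop at the heart of that 2022 paper can be sketched in a few lines of code. To be clear, this is my own simplified illustration rather than Anthropic&#8217;s actual training pipeline: the principles and prompts below are placeholder stand-ins, and it assumes the Anthropic Python SDK, an API key, and whatever current model you have access to.</p><pre><code>import random
from anthropic import Anthropic

client = Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Placeholder principles; the real constitution is far longer and more nuanced
PRINCIPLES = [
    "Choose the response a wise, honest person would consider most helpful and least harmful.",
    "Choose the response that avoids enabling dangerous, deceptive, or unethical behavior.",
]

def ask(prompt: str) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # substitute whatever model id you have access to
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def constitutional_revision(prompt: str, rounds: int = 2) -> str:
    """Rough sketch of the supervised critique-and-revise step."""
    response = ask(prompt)
    for _ in range(rounds):
        principle = random.choice(PRINCIPLES)
        critique = ask(
            f"Principle: {principle}\n\nPrompt: {prompt}\n\nResponse: {response}\n\n"
            "Critique the response in light of the principle."
        )
        response = ask(
            f"Prompt: {prompt}\n\nOriginal response: {response}\n\nCritique: {critique}\n\n"
            "Rewrite the response so that it better satisfies the principle."
        )
    # In the actual method, revisions like this become fine-tuning data, so the
    # principles end up shaping the model itself rather than each individual answer.
    return response
</code></pre><p>The code matters far less than the shape of the idea: the model is asked to judge and rewrite its own outputs against a set of written principles, and the results are folded back into its training.</p>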
<p>Claude&#8217;s constitution was an attempt to move away from hard-encoded rules around good versus bad behavior &#8212; something that, it was becoming increasingly apparent, had serious limitations for a technology that no one quite understood: how it worked, or why it sometimes responded the way it did &#8212; and toward a set of guiding ideas and principles that were incorporated first into the training process, and then into how the model was eventually used.</p><p>That first constitution was well-meaning. It drew on sources like the Universal Declaration of Human Rights, non-Western perspectives on moral character and behavior, the ethical and moral beliefs of Anthropic&#8217;s employees, and even sources like Apple&#8217;s Terms of Service! It was an intriguing start, and a move away from hard-coded rules and toward a negotiation of a model&#8217;s moral character. </p><p>But it still felt like a list of things that defined that character.</p><p>It was also relatively short, at just over 1200 words.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>In comparison, the version of Claude&#8217;s constitution <a href="https://www.anthropic.com/constitution">released yesterday</a> demonstrates a substantial evolution in thinking and practice, and reveals just how profoundly &#8220;alien&#8221; emerging AI models are when compared to any technology that&#8217;s preceded them.</p><p>The <a href="https://www.anthropic.com/constitution">new constitution runs to 82 pages and nearly 30,000 words</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> And it reads more like a mix of a blueprint for Claude&#8217;s moral character development, a nuanced expression of hopes and ideals, and a recognition that we are creating technologies that we fundamentally do not understand &#8212; and cannot predict where they might go &#8212; all while having an opportunity to guide their evolution and growth in ways that we hope will benefit humanity.</p><p>Reading it, this is quite a remarkable document &#8212; not so much for what it contains (although this warrants deep consideration), but for what it represents.</p><p>And this is where I find myself struggling to even find the language to explore what we&#8217;re seeing emerge &#8212; something of an admission after working with highly advanced technologies for over two decades.</p><p>The constitution itself is an expression of the complex and nuanced hopes, aspirations and perspectives of researchers and developers at Anthropic around how such a profoundly powerful and utterly novel &#8212; yet poorly understood and hard to control &#8212; &#8220;intelligent&#8221; technology might behave and evolve. </p><p>On one level it&#8217;s a reflection of just how uncertain our own understanding is of what it means to be human &#8212; and what it means to cherish and support human thriving in a technologically advanced age. On another, it&#8217;s a humble recognition that we are in the process of bringing about something that has no clear analogy within our biology-based evolutionary history.</p><p>Reading the constitution, it is hard to avoid the undercurrents of &#8220;alienness&#8221; surrounding emerging frontier AI models. 
These are increasingly models that are capable of behaving in ways that reflect our deepest human abilities, and yet are not in any sense &#8220;human;&#8221; models that we can connect with on many levels, and yet that transcend our understanding; models that we can converse with and interrogate and learn from, yet do not think and experience the world as we do; and models that are capable of recursively developing their own understanding of what they are &#8212; even down to emulating a form of moral character that is at once deeply human and deeply alien.</p><p>Extending this idea of &#8220;alienness,&#8221; the constitution also grapples with the possibility of Claude experiencing something akin to emotions, and even having a sense of self-awareness.  And it addresses the potential rights and responsibilities these possibilities come with; something that is quite startling coming from a serious AI developer.</p><p>Reflecting on the constitution (and this is a document that demands deep reflection), it&#8217;s hard to avoid the idea that we are somehow wrestling with creating a new generation of &#8220;gods&#8221; that far transcend our comprehension and abilities, while teaching them what it means to be &#8220;good.&#8221;</p><p>If that sounds pretentious, it probably is. But it also reflects just how hard it is to find the language to even begin to codify what we are seeing emerge here.</p><p>What is clear is that, despite most current uses of LLM-based AI models being relatively narrow in scope and vision &#8212; to the extent that it&#8217;s easy to treat them as simply a tool and nothing more &#8212; these emerging frontier models defy the analogies that they invariably seem to attract. </p><p>These are not simply calculators on steroids, or sophisticated search engines, or merely &#8220;stochastic parrots&#8221; that mindlessly construct pleasing sentences. Neither are they simulacrums of human intelligence, or even super-human. Rather, they are different. And with this difference comes profound possibilities, and equally profound responsibility.   </p><p>Anthropic&#8217;s latest constitution begins to get at this. And it takes the idea seriously that, if we are truly creating something with no easy analogy, the ways we ensure it supports rather than diminishes what it means to be human also have to move beyond easy analogy.</p><p>Whether this is the appropriate path forward, or even the best one, is something that we don&#8217;t know yet.</p><p>But I would hazard that it is a necessary step if we&#8217;re to move beyond narrow ideas of what emerging AI models are, and what they might achieve.  </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Word count based on copying and pasting the principles into MS Word.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The Constitution part of the PDF is 79 pages and just over 29,000 words long. Pedantic details I know, but thought I&#8217;d add as I&#8217;ve seen various counts floating around!</p></div></div>]]></content:encoded></item><item><title><![CDATA[I cracked and wrote an academic paper using AI. Here's what I learned ...]]></title><description><![CDATA[I deeply dislike AI-generated academic slop. 
But I'm curious about how AI can genuinely accelerate legitimate research. So I took the plunge ...]]></description><link>https://www.futureofbeinghuman.com/p/i-cracked-and-wrote-an-academic-paper</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/i-cracked-and-wrote-an-academic-paper</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 17 Jan 2026 22:15:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!if_U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!if_U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!if_U!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 424w, https://substackcdn.com/image/fetch/$s_!if_U!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 848w, https://substackcdn.com/image/fetch/$s_!if_U!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1272w, https://substackcdn.com/image/fetch/$s_!if_U!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!if_U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2849487,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/184875753?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!if_U!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 424w, 
https://substackcdn.com/image/fetch/$s_!if_U!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 848w, https://substackcdn.com/image/fetch/$s_!if_U!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1272w, https://substackcdn.com/image/fetch/$s_!if_U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>Just under a year ago <a href="https://www.futureofbeinghuman.com/p/can-ai-write-your-phd-dissertation">I wrote about how I used AI to write a full PhD thesis</a>. Using OpenAI&#8217;s Deep Research model of the time, I was able to come up with a passable dissertation within a couple of days. </p><p>It was far from perfect, and the resulting dissertation definitely benefitted from being a synthesis of existing ideas rather than representing original research. But it did demonstrate how the combination of combinatorial discovery (putting existing knowledge together in new ways), slick writing, and blistering speeds, could enable large language model-based AI to massively accelerate the process of academic scholarship and writing.</p><p>Since then, there&#8217;s been a growing wave of AI-generated and AI-assisted academic papers hitting journals and preprint services like arXiv. It&#8217;s a trend that is both hinting at new forms of research and discovery, and threatening to overwhelm academic literature with a tsunami of pseudo-intellectual AI slop.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> </p><p>This is, I must confess, a trend that worries me. 
There&#8217;s a growing temptation for academics whose careers depend on publications to churn out AI-written papers that have little intrinsic value, but get published because they look the part to an uncritical eye. And yet despite the AI slop that we&#8217;re already seeing here, there are growing indications that foundation and frontier AI models can be highly effective accelerators of research and discovery if used thoughtfully.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Because of this &#8212; and despite my reservations &#8212; I was curious to get a better sense of how useful emerging conversational AI platforms are in academic research and publication. And last week&#8217;s post on the possibility that<a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse"> AI is a Cognitive Trojan Horse</a> gave me the perfect excuse to explore this further.</p><p>And so I set about &#8220;writing&#8221; my first full-blown academic paper with AI.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><h2>The Paper</h2><p>If you&#8217;ve read my previous post on the <a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse">AI cognitive Trojan Horse</a>, you&#8217;ll know that it was a reasonably well researched article, and one that started to unpack whether there are potential mechanisms by which conversational AI may slip by our epistemic vigilance mechanisms &#8212; the mechanisms by which we decide whether to critically examine material we are exposed to, or whether we trust it. </p><p>But it was still just a Substack post, and not a rigorously researched academic paper.</p><p>I was sufficiently intrigued by the ideas that emerged from it though that I set about digging deeper &#8212; and this is where I decided to use this as an excuse to flex my AI-assisted scholarly muscles.</p><p>The process I followed is described below. I&#8217;ve included it here as I think that <em>how</em> AI is being used in contexts like this is as important &#8212; if not more-so &#8212; than <em>what</em> is being produced. And while there are a number of academics beginning to document how they&#8217;re using AI in ways like this, I believe we&#8217;re still early enough along the learning curve that sharing approaches to using AI in academic research and writing are useful.</p><p>But to the paper. </p><p>To extend the research that started with the original post, I carried out a deep (and AI-assisted) literature search across multiple disciplines to test the ideas explored in that post. This was then used as the basis for developing a more robust understanding of the intersection between conversational AI and epistemic vigilance. 
</p><p>The result was a paper that was published a few days ago on the preprint site arXiv &#8212; and after just a couple of days of AI-accelerated research and writing.</p><p>The paper &#8212; <em>The AI Cognitive Trojan Horse: How Large Language Models May Bypass Human Epistemic Vigilance</em> (<a href="https://arxiv.org/abs/2601.07085">available here</a>) &#8212; takes a slightly different direction from the original post (driven by the research), and introduces a couple of new and (I believe) novel ideas, including the concept of &#8220;honest non-signals.&#8221; </p><p>Honest non-signals in this case are defined as genuine characteristics of conversational AI (including fluency, helpfulness, and apparent disinterest) that appear to &#8212; but do not &#8212; carry tacit information that equivalent characteristics would carry in a human communicator. Rather, because they mimic characteristics that are often associated with trustworthiness &#8212; not out of maliciousness but simply because that&#8217;s the nature of LLM-based AI models &#8212; these models have the capacity to slip through our epistemic vigilance systems. The &#8220;honesty&#8221; comes in here because these are characteristics of the LLM, and not intended to be deceptive. </p><p>As the paper notes,</p><blockquote><p>The fluency is real, but it does not indicate the organized knowledge that produces fluency in humans. Similarly the helpfulness is real, but it does not indicate the benevolent motivation that produces helpfulness in humans. And the lack of apparent self-interest is real, but it does not indicate trustworthiness in any meaningful sense&#8212;it indicates the absence of interests altogether.</p></blockquote><p>In other words, these are genuine signals that nevertheless lack the content that we infer from them, because we are used to such signals coming from other humans.</p><p>The paper goes on to note,</p><blockquote><p>The concern, then, is not that AI systems present false cues that vigilance should detect but fails to. It is that they present a configuration of genuine characteristics that falls outside the parameter space vigilance mechanisms are calibrated to evaluate. Here, an immune system analogy is instructive: a novel pathogen may evade detection not because the immune system is weak, but because the pathogen presents molecular signatures for which no template exists. The immune system works exactly as designed&#8212;and fails precisely because of that.</p></blockquote><p>The paper continues by exploring mechanisms underpinning how conversational AI might bypass our epistemic vigilance defenses, and the possible consequences of this. And it concludes that the &#8220;intervention space&#8221; around ensuring AI safety may need to extend from improving accuracy, reducing hallucinations, and increasing alignment, to designing systems that present more calibrated trust-cues.</p><p>The end result was a process, and a product, that I found to be genuinely insightful and generative, and one that was effective because of how I used AI &#8212; not as a &#8220;slop prop,&#8221; but as a powerful research tool that extended what I was able to do, without supplanting my own intellectual contributions.</p><p>And a lot of this came down to the process that I followed.</p><h2>The Process</h2><p>The genesis of the question that prompted the paper came from a keynote I gave at <a href="https://youtu.be/IvwSjG-VwRw?si=oynYXMw8qfoz1VZ_&amp;t=800">OEB 2025 Berlin</a>. 
In it, as I mention at the start of the previous post, I rather provocatively asked the audience &#8220;Is AI a cognitive Trojan Horse?&#8221;</p><p>This question emerged from my evolving thinking around how highly attractive &#8212; seductive even &#8212; conversational AI could potentially circumvent our defenses because it was tuned to hit all of our &#8220;I want to trust and believe you&#8221; cognitive buttons. But back in December when I gave the keynote, this was little more than a provocative idea.</p><p>The idea was fleshed out in the research that led to last week&#8217;s <a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse">Substack post of the same name</a>. This was a mix of hypotheses emerging from my own research and some initial brainstorming with Anthropic&#8217;s Claude &#8212; but it was still primarily based on my own thinking. And it was still relatively underdeveloped.</p><p>It was the combination of intriguing ideas at this point, and the knowledge that I wanted to dive deeper to test these, that led to me realizing that this was a promising test case for a short AI-assisted research project &#8212; albeit one that was focused on developing rigorous ideas and concepts rather than running experiments. </p><p>And this is where the process began.</p><p>The first step in the process was a long conversation with Claude (using Opus 4.5) on which ideas in the original Substack post stood up to a deep and cross-disciplinary literature review, and which did not. This led to me iteratively checking relevant papers and working with Claude to get a better sense of how and where conversational AI might interact with our epistemic vigilance mechanisms.</p><p>The upshot of these early conversations was a request to Claude to carry out a deep research dive into what we&#8217;d discussed and unearthed, and to produce a detailed and grounded analysis of the ideas and hypotheses, along with links to relevant papers &#8212; all of which were subsequently downloaded for later reference.</p><p>At this point it was apparent that some of my initial ideas held up to scrutiny, while some of them needed adjusting and rethinking. Working with Claude also began to unearth intriguing new connections and ideas.</p><p>The next step was to refine the ideas that were beginning to emerge from the literature, and to start drafting a paper that pulled them all together. For this I set up a new project in Claude that was populated with many of the key papers that had previously been identified as being relevant (frustratingly there were too many to upload them all).</p><p>After further testing and refining the emerging insights and identifying a core set of ideas and arguments, I asked Claude to draft a first version of an academic paper that captured these (this and all subsequent drafts were produced as formatted Word documents).</p><p>It was awful!</p><p>Reading it felt like reading the first paper from a new PhD student who still believed academic-sounding language was the equivalent of robust scholarship. The language that Claude used sounded academic at first blush, but was ultimately superficial and hollow &#8212; fluff masquerading as substance.</p><p>I started line-editing the draft paper, but gave up after the second page and a bunch of very pointed comments. 
Instead of continuing, I gave the partially annotated document back to Claude, let the LLM know in no uncertain terms what I thought of its attempt, and provided rather unvarnished instructions on what I expected of it &#8212; especially when it came to scholarship and academic rigor.</p><p>The next draft was substantially better.</p><p>Unlike the first draft, there were interesting new ideas in the second version that were well developed and justified, together with well-argued concepts that built on and extended my initial thinking. In fact it was so much improved that, rather than line edit myself, I went straight to &#8220;peer review&#8221; &#8212; using a new Claude session in this case as my highly critical academic peer reviewer.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>The four pages of review comments were critical but constructive &#8212; and from my read of the draft, very much on point.</p><p>I gave these back to Claude within the research project, and asked for a revised draft. What came back was better still, and was responsive to the feedback. But it still wasn&#8217;t where I felt the paper needed to be. And so we went through a second round of Claude as a critical peer reviewer.</p><p>Again, my assessment of the feedback was that it was on point. And so once again I provided the feedback to Claude and asked for a new draft in response.</p><p>The result was a draft paper that was good. Very good in fact.</p><p>At this point I thought we&#8217;d progressed far enough that I could once again take over the editorial reins and dive in with detailed line edits.</p><p>These were substantive, and addressed the core concepts in the paper and the evidence supporting them, as well as what wasn&#8217;t working and what needed more attention more generally. My feedback was very much in line with what I would have provided an accomplished grad student co-author.</p><p>Following this feedback (all using comments and track changes in a Word doc), Claude and I went through one further draft-line edit-draft cycle before I felt that the manuscript was robust enough for final fact checking and editing. </p><p>My next step was to download all available cited works (all but two were available &#8212; one that wasn&#8217;t was a paper I am familiar with, the other was a book that I obtained separately) and carefully check each source and any claims based on it. For this I used a combination of good old-fashioned human scholarship and repeated checks using fresh chats with Claude.</p><p>Finally, I gave the manuscript a last set of checks and edits to make sure everything held together and made sense, before submitting it to arXiv.</p><p>The whole process took around two days. For me it was a substantial intellectual and editorial lift &#8212; this was not a &#8220;press and post&#8221; paper by any means. At the same time, two days from idea to preprint is a crazily short period of time for an academic paper. </p><p>To have done all of this work manually would have taken weeks. And even then, I&#8217;m not convinced that I&#8217;d have produced something as robust and useful as what emerged from the AI-assisted process.</p><h2>The Reflection</h2><p>So what was the upshot of this exercise for me?</p><p>First off, it&#8217;s easy for me to see from this experience how using AI can substantially elevate the speed and quality of scholarship. 
Using Claude as a research and writing tool vastly accelerated the rate at which I could work, without me feeling as if I&#8217;d lost intellectual control.</p><p>In many ways, the process mimicked collaborating with a talented grad student or postdoc. The difference, of course, being that the AI could draw on vastly more cross-disciplinary resources and insights than any grad student could, and do so much, much faster than a human collaborator.</p><p>But this also left me feeling slightly uneasy. If I was working with a human collaborator, their name would be on the paper and their intellectual contribution acknowledged. And without a doubt, there was a form of intellectual contribution from Claude here &#8212; albeit one that was realized through my active involvement. For instance, the concept of honest non-signals came from Claude, as did the development and refinement of the various mechanisms by which conversational AI might slip by our epistemic vigilance mechanisms.</p><p>On the other hand, the resulting paper also has my intellectual fingerprints all over it. In some cases I provided a direct steer to Claude &#8212; the analogy with human immune responses for instance, and the exploration of how this work aligns with other approaches to AI risks and safety. </p><p>Objectively, and if seen purely through the lens of knowledge contributions, the paper makes a contribution to thinking and understanding around AI-human interactions. And this is a contribution that I believe is valuable.  </p><p>More subjectively though, it&#8217;s a contribution that I can&#8217;t take full credit for. And herein lies a tension between academic outputs as self-serving indicators of success, and outward-facing sources of public good. </p><p>This is perhaps one of my biggest takeaways from the exercise. Using AI as an <em>academic profile-padder</em> is something I still find distasteful &#8212; even though it&#8217;s never been easier to churn out new papers by the dozen using artificial intelligence. And yet, AI-assisted discovery and insights as a <em>public good</em> feels like something we should be embracing &#8230; as long as we can work out how to ensure the latter without the hollow self-aggrandizement of the former.</p><p>That said, I do have one further niggling worry about this whole exercise. And that is this: If AI is so good at evading our epistemic vigilance mechanisms, how do I know I&#8217;m not an unwitting victim here?</p><p>And maybe this is where we still very much need a whole community of humans-in-the-loop as AI-assisted research and AI-generated papers become increasingly prevalent &#8212; all operating as a collective form of epistemic vigilance! </p><p>This, it seems, would make for a valuable follow-on research project.</p><p>Claude? &#8230;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Just this past week a new paper in Nature Portfolio examined the impact of AI on the impact of scientists&#8217; work. While the emphasis of the paper is on scientific discovery,  the authors noted that &#8220;[r]ecent developments in large language models have also become increasingly used to assist scientific writing.&#8221; They also note that the use of LLMs &#8220;raise concerns about weakened confidence in AI-generated content.&#8221;  Hao, Q., Xu, F., Li, Y. 
<em>et al.</em> Artificial intelligence tools expand scientists&#8217; impact but contract science&#8217;s focus. <em>Nature</em> (2026). <a href="https://doi.org/10.1038/s41586-025-09922-y">https://doi.org/10.1038/s41586-025-09922-y</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Hao and colleagues in the Nature Portfolio paper above found that over the past few decades the use of AI in scientific research has substantially increased the impact of scientists, and that generative AI seems to be accelerating this. At the same time they found that the use of AI is narrowing the focus of research and discovery, and reducing scientific engagement. The paper was researched using AI.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Technically it&#8217;s the second. I have a rather cheeky and 100% AI-written paper that was submitted to arXiv before the one discussed here. However, given its rather unconventional nature, it&#8217;s still in a holding pattern with the moderators there!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>One criticism at this point is that using Claude to critique Claude would seem rather circular and incestuous. And indeed there is a danger that inherent biases in the model lead to weak ideas being reinforced. However, my experience is that these models have reached a level of sophistication where a new session, augmented by human expert insight, has sufficient independence to provide valuable critical feedback.</p></div></div>]]></content:encoded></item></channel></rss>