<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[David at SenTeGuard]]></title><description><![CDATA[Founder of SenTeGuard. Regain Control of your Ideas]]></description><link>https://www.letters.senteguard.com</link><image><url>https://substackcdn.com/image/fetch/$s_!au9C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15595b1a-6a9e-4dd6-adcc-bb36c4acb1fd_648x648.png</url><title>David at SenTeGuard</title><link>https://www.letters.senteguard.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 11:07:17 GMT</lastBuildDate><atom:link href="https://www.letters.senteguard.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[SenTeGuard]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[davidsente@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[davidsente@substack.com]]></itunes:email><itunes:name><![CDATA[David]]></itunes:name></itunes:owner><itunes:author><![CDATA[David]]></itunes:author><googleplay:owner><![CDATA[davidsente@substack.com]]></googleplay:owner><googleplay:email><![CDATA[davidsente@substack.com]]></googleplay:email><googleplay:author><![CDATA[David]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[SenTeGuard Update - March 2026]]></title><description><![CDATA[Thank you to those who have signed up since my last update.]]></description><link>https://www.letters.senteguard.com/p/senteguard-update-march-2026</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/senteguard-update-march-2026</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Mon, 23 Mar 2026 12:32:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!au9C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15595b1a-6a9e-4dd6-adcc-bb36c4acb1fd_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Thank you to those who have signed up since my last update. Please share if you know of anyone who would be interested. If this newsletter is connected to a &#8220;.edu&#8221; address you will lose soon, and you would like to continue receiving it, feel free to add another email.</p><p><strong>On Request:</strong></p><ul><li><p>Prototype testing of the <a href="https://senteguard.com/blog/sensitive-information-reachability-the-problem-and-the-solution-1768745959993">Moyo</a> information space mapper. Find leaks of your controlled information (classified, proprietary, personal) in public LLMs.</p></li><li><p>SenTeGuard Pilot.</p></li></ul><p><strong>Broad Policy Articles:</strong></p><ol><li><p><a href="https://studentreview.hks.harvard.edu/wrangling-with-explosive-ai-growth/">Wrangling With Explosive Growth</a> <a href="https://www.letters.senteguard.com/p/wrangling-with-explosive-growth-harvard">Substack</a><br>Article published in the Harvard Kennedy School Student Policy Review last month. I argue that while the pace of AI development can feel unprecedented and unsettling, periods of rapid, seemingly unconstrained technological growth are not new.
How have we addressed unconstrained growth in the past and how can we do so now?</p></li></ol><ol start="2"><li><p><a href="https://senteguard.com/blog/cyborg-scholars-1769443602945">Cyborg Scholars</a> <a href="https://www.letters.senteguard.com/p/cyborg-scholars">Substack</a></p><p>Software engineers have been able to incorporate LLMs into their workflows due to looser traditions of attribution (they copy and paste a lot). The article discusses how strict attribution standards in other fields have impeded growth and why loosening them may lead to faster growth of knowledge.</p></li></ol><ol start="3"><li><p><a href="https://senteguard.com/blog/gaps-in-meaning">Large Language Models and Gaps in Meaning (Theory)</a> <a href="https://www.letters.senteguard.com/p/large-language-models-and-gaps-in">Substack</a></p><p>I discuss some of the structural limitations LLMs face in representing ideas using human language.</p></li></ol><p><strong>SenTe Focused Articles:</strong></p><ol><li><p><a href="https://senteguard.com/blog/cognitive-security-standards-concept-draft-1768719750448">Cognitive Security Standards (CSS)</a>. The topic of my Harvard Kennedy School culminating Policy Analysis Exercise. I am building a standard of best practices to prevent leakage of protected information (classified, proprietary, personal). I will publish it in full in the coming months.</p></li><li><p><a href="https://senteguard.com/blog/pagerank-and-sente-frameworks-for-new-paradigms">PageRank for Inference: Mapping Reachability in LLM Systems</a> <a href="https://www.letters.senteguard.com/p/pagerank-for-inference-mapping-reachability">Substack</a></p><p>Google&#8217;s central thesis was to bring order to a disordered and chaotic network of internet links. SenTeGuard&#8217;s mission is to bring order to a chaotic space: what LLMs can know and how fast they will learn.</p></li></ol><p><strong>Coming Soon</strong>: Joseki. Shareable rubrics for building with and breaking models.</p>]]></content:encoded></item><item><title><![CDATA[Cyborg Scholars]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/cyborg-scholars</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/cyborg-scholars</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Mon, 23 Mar 2026 04:03:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!au9C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15595b1a-6a9e-4dd6-adcc-bb36c4acb1fd_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://senteguard.com/blog/cyborg-scholars-1769443602945">Original</a></p><p><br>Large Language Model (LLM)-enhanced authorship is accelerating at an extraordinary pace. Within academia, the share of papers crediting an LLM tool or model has grown exponentially since 2023. In software development, over half of all new code commits are now LLM-assisted. Largely due to LLM assistance, the rate of knowledge production has never been higher. The intelligence explosion will not be constrained by the limits of LLM capability, but by our cultural norms around attribution and by linguistic gatekeeping. Although intended to control the quality of academic work, traditional ideas of authorship within many disciplines may instead act as a brake, diminishing the potential for human knowledge growth.
<br></p><p>The accelerating capability of LLM systems to generate scholarly text highlights a longstanding tension within academia: the dependence on clearly identifiable human authorship as a basis for credibility. Universities and journals currently restrict LLM co-authorship, citing questions of accountability, transparency, and research ethics. These concerns are grounded in the principle that scholarly claims must be traceable to a responsible agent who can defend the work.<br></p><h2>Legacy Attitudes Towards Attribution<br></h2><p>Recent public discussions surrounding citation and attribution practices across academia have demonstrated that authorship norms have always involved collaboration, borrowing, and iterative drafting to varying degrees. Committee-produced writing, multi-author workflows, and the role of research assistants and editorial staff have long contributed to the final scholarly voice. The result is paradoxical: LLMs can make knowledge creation faster and clearer than ever, yet systems designed to ensure trust and credit are slowing its publication.<br></p><p>This conflict has played out very differently in software engineering. There, authorship is secondary to utility. Copying, pasting, and reusing existing code is not simply tolerated; it is the norm. Attribution norms are weaker not because developers lack ethics, but because their incentives are aligned around functionality. This norm makes software uniquely suited to rapid LLM integration, because LLM code assistants are a continuation of a long-standing culture of reuse. GitHub Copilot, for instance, builds on decades of norms around forking, patching, and sharing code with minimal concern for original authorship. As a result, software R&amp;D will outpace other disciplines due to relaxed provenance norms.<br></p><p>In 1997, Garry Kasparov became the first world chess champion to lose a match to a computer. The machine, Deep Blue, used brute-force search combined with handcrafted heuristic evaluation; it won through raw computation rather than machine learning. No human has defeated a cutting-edge chess engine since. However, even as humans lost their dominance in pure play, they have been successful against those same machines when playing in a human-machine pair. Competing alongside machines in a style known as <em>cyborg chess</em>, such pairs have routinely outperformed both human grandmasters and standalone AI systems. This model offers a lesson for other domains of knowledge.
The scholars of the future may become &#8220;cyborg scholars.&#8221; Their strength will not lie in generating ideas faster than machines, but in discerning which of those ideas are worth pursuing.<br></p><h2>LLMs as a Lingua Franca<br></h2><p>We should consider some of the advantages of LLM co-authorship. The most direct is the massive creative capability LLMs can offer. LLMs can facilitate brainstorming, assess dispersed datasets, or conduct targeted literature reviews in seconds. They are not replacements for human thought, but enhancers.<br></p><p>A second advantage is that AI tools flatten linguistic barriers. With the aid of LLMs, non-native English speakers can contribute more effectively to academic publishing without years of immersion in academic English or dependence on English-speaking co-authors. <em>Nature</em>, for instance, recently noted a sharp increase in manuscript submissions from non-Anglophone regions correlated with the adoption of LLM-based writing tools. This does not replace subject expertise. Rather, it allows researchers to communicate their contributions more clearly across linguistic and cultural boundaries.<br></p><p>This benefit extends beyond non-native speakers. Even native English speakers who do not write according to the grammars or stylistic mores of elite institutions can now participate more easily in specialized discourse. An economist may use an AI assistant to adapt language for a history journal. A sociologist might adjust verbiage for a technical publication. Perhaps even a high school-educated plumber could contribute to an occupational safety journal. For better or worse, those without the cultural background can now spoof the linguistic shibboleths that once served as informal barriers to membership.<br></p><p>We should use this moment to ask how many of our norms around communication exist to ensure clarity, and how many simply reinforce hierarchies of access. A wider acceptance of AI co-authorship could lead to genuine epistemic democratization: access to creation no longer mediated by elite English-speaking institutions, and a reorientation of academic hierarchy away from aristocratic standards of legitimacy and toward meritocratic ones. The lingua franca for academics may no longer be academic English, but frontier LLMs used as a medium to exchange ideas freely across language, nation, and social class.<br></p><h2>Traditions of Delegation<br></h2><p>Professional knowledge work has long relied on structured delegation. Supreme Court justices have opinions drafted by clerks, generals have orders drafted by staffs, and academics have papers drafted by research assistants. Authorship delegation is nothing new. In each of these cases, the principal&#8217;s role is to provide final judgment and assume liability, not to micromanage the specific language of the document.<br></p><p>We should think of our new LLM assistants in the same way. We can now all be principals, and we can all employ staff. As principals, our responsibility shifts from wordsmith to idea curator. The central question when publishing should be: <strong>Do these words faithfully express what I intend them to?</strong> While it may bruise the ego, the best strategy to accelerate the collective pursuit of knowledge is to assume all writing is enhanced. Natural language should be treated as a neutral medium for transmitting ideas, not as an art form to be guarded.
&#8220;Cyborg academics&#8221; should be welcomed as the next logical stage of scholarship.<br></p><h2>Aesthetic Caveat<br></h2><p>Within academia, writing is often treated as a transparent vehicle for ideas. But in many fields, the voice of the writer forms part of the intellectual contribution itself. Some scholars are recognizable not only for what they argue, but for how they argue it. Their habits, tone, and sense of emphasis are inseparable from the ideas they advance.<br></p><p>As LLM tools increasingly assist in drafting and refinement, these disciplines must ask to what extent individual voice is central to advancing knowledge. If clarity is all that matters, standardized and perhaps sterile LLM prose may be most practicable. But if expression shapes interpretation, then writers have a responsibility to preserve the qualities that make their work distinctly their own. This might mean intentionally drafting certain sections unaided, maintaining stylistic consistencies across works, or using LLMs with deliberate constraints. Recognition of beauty is essential to the human experience, but we should intentionally bifurcate the aesthetic from the pragmatic.<br></p><p>The intelligence explosion will not be limited by LLM capability, but by our willingness to rethink what authorship means. In software, utility has long triumphed, and code is judged by whether it works, not by who wrote it. Academia may follow, if it can draw a sharper distinction between the medium used to communicate ideas and the ideas themselves. As machines master the craft of expression, the human role will evolve from mere authorship to intellectual design. The LLM can become the craftsman, while the human mind remains the architect of the idea. The future of writing will belong to those who can not only originate meaning, but direct the machine to portray it accurately.</p>
]]></content:encoded></item><item><title><![CDATA[Large Language Models and Gaps in Meaning (Theory)]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/large-language-models-and-gaps-in</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/large-language-models-and-gaps-in</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Mon, 23 Mar 2026 04:01:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!au9C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15595b1a-6a9e-4dd6-adcc-bb36c4acb1fd_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://senteguard.com/blog/gaps-in-meaning">Original</a><br><br>At the Tel Aviv AI conference, I watched a presenter build an AI music video with Suno in real time. The presenter nudged prompts, regenerated, tweaked, and regenerated again, not because they could fully explain what was missing, but because they could feel it. The groove was slightly off. The texture was too glossy. The difference between &#8220;close&#8221; and &#8220;right&#8221; was obvious to a musician and frustratingly hard to name.<br></p><p>That moment highlighted something artists know well. Musicians often operate on feel, and they are frequently at a loss for words when asked to describe the feeling they are trying to deliver. They navigate by small edits, guided by an internal objective that is stable enough to steer their work, but not always easy to compress into explicit language. There are meaningful distinctions we can reliably perceive and act on even when we cannot cleanly articulate them.<br></p><p>I wanted to take that observation and generalize it beyond music. In many domains, there are ideas we cannot define crisply in language, even when we can recognize them, compare them, or move toward them by iterative refinement. We may have a sense that an argument is stronger before we can specify why. We may know a conversation is off in tone without being able to formalize the defect. We may detect a contradiction in a narrative &#8220;shape&#8221; before we can point to the sentence that caused it.
The gap between what humans can mean and what humans can precisely say is directly connected to the structure of large language models, because those models are trained and operated through discrete language.<br></p><p>My paper is an attempt to describe that gap mathematically.<br></p><h2>The core idea<br></h2><p>Large language models manipulate <strong>discrete token sequences</strong>, but human meanings behave like points in a <strong>continuous space</strong>. The mismatch forces any model to represent meaning as a <strong>sparse, distorted sampling</strong> of what humans can mean. Some regions of meaning become unreachable, unstable, or hard to verify, even if they are obvious to people.<br></p><h2>Why it&#8217;s important<br></h2><p>A lot of discussion about LLM limitations stays at the surface level: hallucinations, brittleness, prompt sensitivity, or lack of grounding. These symptoms often get treated as engineering glitches rather than structural constraints.<br></p><p>My claim is stronger. Some failures are not bugs that disappear with better prompts. Some failures are not even problems of insufficient intelligence. They are consequences of mapping a continuous target, human meaning, onto a discrete interface, tokens, with a finite internal representation.<br></p><p>If you want to build systems that are reliable, steerable, or safe, you need a vocabulary for these structural gaps.<br></p><h2>Plain language explanation<br></h2><p>The paper builds a three-part picture and then uses it to explain where LLM behavior breaks.<br></p><h3>1) Token space: what the model literally sees<br></h3><p>An LLM is trained on sequences drawn from a finite vocabulary. Even though the number of possible strings is enormous, the set is still fundamentally combinatorial. In any single call, the context window limits the model to a finite set of possible inputs.<br></p><p>This gives the first constraint: the model&#8217;s interface is discrete and bounded.<br></p><h3>2) Meaning manifold: what humans navigate<br></h3><p>Humans do not experience meaning as token strings. Meanings have neighborhoods and smooth variation. You can soften a claim slightly, make an instruction more urgent, shift an emotional tone, or refine an aesthetic feel.<br></p><p>These are not naturally modeled as jumps between discrete symbols. They behave more like motion in a continuous space, which the paper calls the meaning manifold.<br></p><p>The music example is a particularly vivid case. &#8220;More intimate&#8221; or &#8220;less glossy&#8221; is meaningful and actionable, but often hard to define precisely in words.<br></p><h3>3) Discrete semantic manifold: what the model can actually represent<br></h3><p>Inside the model, we talk about embeddings as vectors. But the model is still a finite machine with finite parameters. That means it cannot realize a perfect continuous image of meaning. What it realizes is a discrete cloud of representable internal states.<br></p><p>The paper calls that cloud the discrete semantic manifold. It is the set of internal states the model can actually visit and use.<br></p><h2>Separating representation from endorsement<br></h2><p>A common mistake is to treat &#8220;the model produced it&#8221; as &#8220;the model knows it.&#8221; This paper splits that into two steps.<br></p><p>First is representation. Did the model land in a state that corresponds to a stable human meaning at all?<br></p><p>Second is endorsement.
Even if it corresponds to a meaning, is it a meaning the system should accept under verification?<br></p><p>That distinction becomes a formal tool in the paper.<br></p><h2>A guided tour of the argument<br></h2><p>The paper begins with discreteness. Token strings form a countable space, and per call the context window makes the relevant input set finite. Even before we talk about intelligence, there is already an interface mismatch with human meaning.<br></p><p>It then introduces the meaning manifold as an ideal target. The manifold captures semantic neighborhoods, smooth variation, and fuzzy boundaries.<br></p><p>Next, it defines the model&#8217;s realized semantic space as discrete. For a fixed architecture and context limit, the set of internal states the model can produce is effectively finite. This makes the tension between discrete representation and continuous meaning explicit.<br></p><p>To connect model states to human meanings, the paper introduces a conceptual projection from internal states to points on the meaning manifold. This projection can be many-to-one, partial, and non-surjective. Those three properties correspond to redundancy, incoherence, and unreachable meanings in practice.<br></p><p>The paper then defines an operational information space using a generator and a verifier. This separates what the model can produce from what the system can produce and endorse under checks. Many hallucinations live in the gap between those two sets.<br></p>
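<p>To keep those objects straight, here is a compressed sketch in my own notation. The draft&#8217;s formal definitions may differ, so treat this as a reading aid rather than the paper&#8217;s actual formalism:</p><pre><code>% Reading-aid notation (mine, not necessarily the paper's)
T = V^{\le k}                        % token strings over vocabulary V;
                                     % finite for fixed context length k
\mathcal{M}                          % meaning manifold (continuous)
S, with |S| finite                   % realized internal states
\pi : S \rightharpoonup \mathcal{M}  % partial, many-to-one projection
\pi(S) \subsetneq \mathcal{M}        % unreachable meanings: \mathcal{M} \setminus \pi(S)
\mathcal{I} = \{ y \in \mathrm{Gen}(T) : \mathrm{Ver}(y) = 1 \}
                                     % operational information space:
                                     % generated AND endorsed; many
                                     % hallucinations live in \mathrm{Gen}(T) \setminus \mathcal{I}</code></pre>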
<h2>Returning to music<br></h2><p>Music provides a concrete anchor for why the framework matters. Musicians often know exactly what they want and can move toward it through incremental edits, even when they cannot describe it precisely. This shows that humans can navigate meaning spaces that are only partially expressible in language.<br></p><p>Music also highlights the difficulty of verification. For aesthetic and cultural meaning, verification is often subjective, community-dependent, and unstable over time. That makes the boundary of what counts as acceptable output fuzzy and expensive to define.<br></p><p>This is the broader gap the paper is trying to formalize. It is not only a representational gap between tokens and meaning, but also a verification gap between what can be generated and what can be reliably endorsed.<br></p><h2>What to watch for in the full paper<br></h2><p>As you read the full draft, keep three distinctions in view.<br></p><p>First, the difference between a discrete interface and a continuous target. Tokens are discrete, but meaning behaves continuously.<br></p><p>Second, the difference between representable and unreachable meanings. Some meanings are structurally missing from the model&#8217;s sampling, not just difficult to reach.<br></p><p>Third, the difference between meaningful and endorsed outputs. A sentence can express a clear meaning and still fail verification.<br></p><h2>Closing thought<br></h2><p>The Tel Aviv demo made visible something easy to miss when thinking only in terms of language. Many of the most important human judgments are not discrete propositions. They are directions, gradients, and neighborhoods in a space we can feel our way through.<br></p><p>Large language models can be extremely powerful within the regions of that space they sample well. But the structure of tokens, finite context, and finite representation means they will always leave gaps. The goal of this paper is to describe those gaps clearly enough that we can reason about them, measure them, and design around them.</p>]]></content:encoded></item><item><title><![CDATA[PageRank for Inference: Mapping Reachability in LLM Systems]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/pagerank-for-inference-mapping-reachability</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/pagerank-for-inference-mapping-reachability</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Thu, 19 Feb 2026 03:31:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!au9C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15595b1a-6a9e-4dd6-adcc-bb36c4acb1fd_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://senteguard.com/blog/pagerank-and-sente-frameworks-for-new-paradigms">Original</a><br><br>In every major computing era, new capabilities create a new kind of complexity, and there is always opportunity in figuring out how to visualize and navigate it. In the late 1990s, the web was exploding in size, but it was hard to know what to trust or where to start until Google made the link graph not just measurable but navigable with PageRank. PageRank did not just score authority; it created a usable interface that turned chaos into confidence.<br></p><p>About a decade later, AWS was not just &#8220;renting servers.&#8221; It made infrastructure understandable and operable through standard building blocks, APIs, and monitoring, so teams could provision and manage systems deliberately instead of by guesswork. In each case, the winners were the ones who built the maps, metrics, and interfaces that turned a chaotic new substrate into something people can use with confidence. At SenTeGuard our mission is to make sense of the new LLM information environment.<br></p>
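<p>For readers who have not seen the algorithm itself, here is a minimal power-iteration sketch of PageRank in Python. It is purely illustrative of the original link-graph idea and is not SenTeGuard code:</p><pre><code>def pagerank(links, damping=0.85, iters=50):
    """Score each page by the stationary weight of a random surfer."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Teleportation term: the surfer jumps to a random page.
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
            else:
                # Dangling page: spread its weight over all pages.
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Toy graph: pages a-d and their outbound links.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(graph))  # "c" accumulates the most authority</code></pre><p>The point for this essay is not the arithmetic but the interface: a single comparable score per node is what made the chaos navigable.</p>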
<h2>Example<br></h2><p>When a Fortune 500 company deploys an LLM across its knowledge base, what can it infer about merger plans from scattered financial reports, calendar patterns, and organizational changes? What trade secrets become visible when the model connects technical documents with supplier emails and hiring patterns?<br></p><p>No one knows, and organizations that cannot answer these questions cannot safely deploy AI at scale. Without visibility into what becomes inferable, companies face a choice between artificial constraint and uncontrolled exposure. Organizations that ignore this problem will leak intellectual property through inference, face regulatory exposure from unexpected data combinations, and cede competitive advantage to those who can deploy AI systems with confidence rather than caution.<br></p><h2>Reachability as a New Risk Surface<br></h2><p>LLMs introduce a new kind of complexity. They take scattered fragments across a corpus and make them coherent, not just by retrieving what is already written, but by stitching together implications, filling in missing steps, and surfacing conclusions that were never explicitly stated.<br></p><p>This is <strong>reachability</strong>: what an LLM can conclude by connecting fragments across your data, even when those conclusions were never written down.<br></p><p>As models improve and their working context expands, the frontier of what can be reached from the same underlying material grows faster than intuition can track. Traditional security assumes data is either accessible or it is not. LLMs break that model. They make inference itself an exfiltration channel. Nothing needs to be stolen if the system can reconstruct sensitive conclusions from scattered signals.<br></p><h2>The Missing Layer in the LLM Era<br></h2><p>The LLM era needs the equivalent of what PageRank and AWS were for their breakthroughs: maps and metrics that make a chaotic information environment legible.<br></p><p>SenTeGuard&#8217;s thesis is that information reachability is not a temporary quirk to be patched away but inherent to LLMs as a platform. Models will not solve it on their own because reachability is structural. The default trajectory is expanded reachability, and the only question is whether you can see it happening and whether you can bound it intentionally.<br></p><p>Our response is an integrated platform with three layers (and counting) that work together to make the LLM information environment visible, controllable, and operational.<br></p><h2>Moyo: The Mapping Layer<br></h2><p><strong>Moyo</strong> is the mapping layer. It is built to answer the hardest question in LLM security:<br></p><blockquote><p><em>What becomes inferable when you combine these sources?</em><br></p></blockquote><p>Moyo treats inference as an exfiltration channel and helps organizations model their information environment as a reachable space. It runs tests that probe what an LLM can infer from a base corpus and produces legible outputs that show where exposure is growing and where controls are working.</p><p>&#8212; When a company combines its hiring database with Slack archives, Moyo shows that the LLM can now infer which executives are likely to be terminated.<br>&#8212; When engineering docs meet customer support tickets, Moyo reveals what product vulnerabilities become visible.<br></p><p>Moyo creates the PageRank equivalent for inference risk: a usable interface that makes reachability navigable.<br></p>
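<p>To make the idea concrete, here is a toy version of such a probing loop. Everything here is hypothetical: <code>ask_llm</code> is a stand-in completion call and the corpora are placeholders; this is my sketch of the concept, not Moyo&#8217;s actual interface:</p><pre><code>def becomes_reachable(ask_llm, base_docs, extra_docs, question):
    """Does joining extra_docs to base_docs make `question` answerable?"""
    def probe(docs):
        context = "\n\n".join(docs)
        prompt = (
            "Using ONLY the context below, answer the question.\n"
            "Reply UNKNOWN if the context is insufficient.\n\n"
            f"Question: {question}\n\nContext:\n{context}"
        )
        return ask_llm(prompt).strip()

    before = probe(base_docs)
    after = probe(base_docs + extra_docs)
    # Exposure grew if the join turned UNKNOWN into an answer.
    return before == "UNKNOWN" and after != "UNKNOWN"</code></pre>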
<h2>SenTeGuard: The Enforcement Layer<br></h2><p><strong>SenTeGuard</strong> is the enforcement layer. It sits where humans and systems actually touch LLMs&#8212;documents, prompts, workflows, and connectors&#8212;and reduces exposure at the point of use.</p><p>&#8212; When a developer pastes code into an LLM, SenTeGuard blocks the API key embedded in line 47 before it reaches the model.<br>&#8212; It helps organizations prevent sensitive data from entering unsafe contexts.<br>&#8212; It detects high-risk joins where separate domains get combined in ways that create new conclusions.<br>&#8212; It applies policy to real workflows rather than abstract rules.<br></p><p>If Moyo shows you where the boundary is, SenTeGuard enforces it.<br></p>
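<p>At its very simplest, point-of-use enforcement can be pictured as a filter in front of the model call. The sketch below is a regex-level caricature: the patterns are illustrative examples, and a production pipeline would be far more sophisticated:</p><pre><code>import re

# Toy point-of-use filter: refuse prompts carrying secret-shaped strings.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # common "sk-" API key shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def check_prompt(text):
    hits = [p.pattern for p in SECRET_PATTERNS if p.search(text)]
    if hits:
        raise ValueError(f"blocked: matched secret patterns {hits}")
    return text</code></pre>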
<h2>Joseki:Wrapperhub &#8212; Integration and Orchestration<br></h2><p><strong>Joseki:Wrapperhub</strong> is the integration and orchestration layer that makes the messy middle legible.<br></p><p>In practice, LLM use does not happen in a single prompt box. It happens across wrappers, agents, connectors, routing logic, tool calls, retries, and a growing pile of glue code that quietly becomes your real product surface.<br></p><p>Joseki:Wrapperhub centralizes that surface.<br></p><p>It standardizes how models are invoked, how tools are exposed, and how context is assembled, so behavior is consistent enough to reason about and evolve. It also creates a single place where guardrails, logging, and evaluation hooks can live, turning &#8220;a bunch of LLM experiments&#8221; into an operational system you can instrument, compare, and improve over time.<br></p><h2>From Experiments to Infrastructure<br></h2><p>This field is new, and the problems change weekly because the platform changes weekly. Model capabilities rise. Retrieval improves. Tool use grows.<br></p><p>As models become embedded across regulated and high-stakes environments, legible reachability maps and enforceable boundaries become foundational infrastructure. As LLMs move from experiments to infrastructure, organizations need the same confidence in their AI environment that AWS gave them for cloud resources.<br></p><p>That is what we are building.<br></p><h2>Mission<br></h2><p>SenTeGuard&#8217;s mission is to make the LLM information environment legible and governable. We build the maps and metrics that turn AI risk from vibes into engineering.</p>]]></content:encoded></item><item><title><![CDATA[Wrangling With Explosive Growth Harvard Kennedy School Policy Review]]></title><description><![CDATA[My piece in the Harvard Kennedy School Student Policy Review was published here https://studentreview.hks.harvard.edu/wrangling-with-explosive-ai-growth/. I argue that while the pace of AI development can feel unprecedented and unsettling, periods of rapid, seemingly unconstrained technological growth are not new. The Industrial Revolution and the Information Age both triggered anxiety, disruption, and real harm alongside extraordinary gains. Looking at how societies responded to those transitions offers useful lessons for how we can govern, adapt to, and benefit from today&#8217;s AI acceleration without assuming that fear or fatalism are our only options.]]></description><link>https://www.letters.senteguard.com/p/wrangling-with-explosive-growth-harvard</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/wrangling-with-explosive-growth-harvard</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 03 Feb 2026 14:40:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!au9C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15595b1a-6a9e-4dd6-adcc-bb36c4acb1fd_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My piece in the Harvard Kennedy School Student Policy Review was published here <a href="https://studentreview.hks.harvard.edu/wrangling-with-explosive-ai-growth/">https://studentreview.hks.harvard.edu/wrangling-with-explosive-ai-growth/</a>. I argue that while the pace of AI development can feel unprecedented and unsettling, periods of rapid, seemingly unconstrained technological growth are not new. The Industrial Revolution and the Information Age both triggered anxiety, disruption, and real harm alongside extraordinary gains. Looking at how societies responded to those transitions offers useful lessons for how we can govern, adapt to, and benefit from today&#8217;s AI acceleration without assuming that fear or fatalism are our only options.</p>]]></content:encoded></item>
<item><title><![CDATA[Cognitive Security Standards — Statement of Purpose (Draft)]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/cognitive-security-standards-statement</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/cognitive-security-standards-statement</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:57:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cVeM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F983f22e0-f675-43b6-b26c-de12ba3fc4bd_824x615.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://senteguard.com/blog/#post-e4VrKbeYBRLDjZn7ZnI6">Original</a></strong></p><h2><strong>Why Cognitive Security Standards?<br></strong></h2><p>Cognitive security standards are needed because the core risk has shifted from <strong>who can access a file</strong> to <strong>what can be inferred from a set of information once it touches AI systems</strong>. In the classic security world, you can draw boundaries (networks, roles, classification levels, &#8220;don&#8217;t copy this into that tool&#8221;) and mostly trust that separation works. But LLMs collapse those boundaries by turning scattered text into a single substrate that can be searched, summarized and recombined.<br></p><p>That means sensitive concepts can be &#8220;leaked&#8221; without anyone ever stealing the original document: the model or an AI-enabled workflow can synthesize the protected idea from harmless-looking fragments, logs, prompts, tickets, chat transcripts, or retrieved snippets. Standards give organizations a shared way to define and measure this new kind of exposure: <strong>semantic leakage</strong>, <strong>inference reachability</strong>, and the conditions under which AI systems can reconstruct what you meant to keep compartmented.<br></p><p>We also need cognitive security standards because the adoption pattern is now <strong>ambient AI</strong>: models show up inside email, docs, IDEs, browsers, customer support, meeting notes, and internal search, often added faster than governance can keep up.
Without standards, every team improvises, vendors grade themselves, and audits focus on familiar but incomplete controls (access lists, data labels, retention) while missing the real failure mode: meaning crossing boundaries.<br></p><p>A good standard would do for AI-era leakage what frameworks like SOC 2 or ISO did for earlier eras of security: define common controls, test methods, and reporting norms. It would help CISOs compare tools, help engineers build safer architectures by default, and help compliance teams document that they&#8217;re managing not just data exfiltration, but inference and synthesis risk across the full AI-enabled workflow.<br></p><h2><strong>Semantic Leakage and Reachability<br></strong></h2><p>Traditional information security assumes that separation works. We put data in compartments (HIPAA systems vs. non-HIPAA systems, TS/SCI vs. Secret, proprietary source code vs. customer-facing knowledge base) and then enforce access control at the boundaries. LLMs complicate this because they turn text and meaning into a single computational substrate. Once information is embedded into a model&#8217;s training set, a vector index, a prompt context, or a long-running memory, the question is no longer just who can access which file but what a system can infer.<br></p><p>My core claim is that LLMs create a <strong>reachability problem</strong>. Once a base set of facts exists inside, or adjacent to, an LLM application, the set of reachable conclusions expands nonlinearly as capability increases, context windows expand, and retrieval tools improve. This is why domain barriers (classification boundaries, privilege tiers, proprietary walls) risk being blurred and eroded, not because someone exfiltrates the original file, but because the system becomes able to synthesize the protected conclusion.<br></p><h2><strong>Exfiltration vs. Infiltration<br></strong></h2><p>To keep terms clean, this section treats semantic leakage as two distinct security failure modes.<br></p><p><strong>Exfiltration risk (outbound leakage):</strong> A user emits protected information (regulated, classified, privileged, proprietary, or contractually restricted) to a data space which could be used to train future LLMs, either directly (verbatim or near-verbatim) or indirectly (paraphrase, reconstruction, high-confidence inference). This is the familiar sensitive information disclosure problem framed for LLMs.<br></p><p><strong>Infiltration risk (unauthorized domain reach):</strong> An LLM reaches an information domain it should not be able to reach through reasoning, aggregation, or mosaic synthesis, such that an unclassified or low-privilege system can output conclusions that are functionally in a higher domain (for example, TS-equivalent insights) even though it never had explicit authorized access to TS files.<br></p><p>In other words, infiltration is not about poisoning or tampering. It is about crossing an epistemic boundary. It is the ability of the system to climb into a domain by joining and inferring across sources that were individually permitted. Both risks undermine domain barriers, but they break them differently. Exfiltration is a disclosure failure. 
Infiltration is a boundary maintenance failure.<br></p><h2><strong>Reachable information<br></strong></h2><figure><img src="https://substackcdn.com/image/fetch/$s_!cVeM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F983f22e0-f675-43b6-b26c-de12ba3fc4bd_824x615.png" width="824" height="615" alt="Image"></figure><p>The above image is a G&#246;del-inspired diagram, popularized in Hofstadter&#8217;s <em>G&#246;del, Escher, Bach</em>, where axioms at the trunk produce theorems (reachable statements) branching outward, and large regions remain labeled as unreachable truths and unreachable falsehoods across the space of well-formed formulas. This picture is useful for LLM InfoSec because it shows why semantic leakage could be seen as a structural problem.<br></p><p>In this framing, the axioms are the system&#8217;s base information: training corpora, connected repositories, indexed RAG stores, tool-accessible data, and memories. The theorems are the set of reachable outputs: what the model can reliably produce from that base. As you add axioms (data sources), you expand the reachable theorem space. As the model becomes more capable, it can traverse more complex proof paths, meaning more joins, more inference chains, and more plausible reconstructions.<br></p>
<p>The legal and compliance bottom line is that under semantic integration, it is no longer safe to say we did not disclose X if the system can derive X from what it did disclose, or from what it was permitted to access in pieces. That is why CSS treats domain boundaries as something you must operationalize and test. If you cannot prove certain classes of statements remain unreachable, you do not have a boundary.<br></p><h2><strong>Ordinary Use Cases<br></strong></h2><p>Most semantic leakage is not going to begin with a state actor breaking airgaps. It begins with a normal person doing a normal thing. A user pastes sensitive text into a cloud LLM because it is the easiest way to draft, summarize, or translate something. The same convenience pattern shows up elsewhere: people paste internal fragments into Google queries, copy sensitive snippets into cloud notes, or drag documents into synced storage like OneDrive so they can work across devices.<br></p><p>The legal and InfoSec difficulty is that these acts can turn a local confidentiality obligation into a third-party processing event. Safeguarding classified or proprietary information does not usually mean trusting Google or OpenAI to do it for you. It means you design systems so that the data never enters an ungoverned third-party plane in the first place, and you assume the default consumer workflow is not your security boundary.<br></p><p>OpenAI&#8217;s consumer-facing ChatGPT has multiple data modes that matter for risk analysis. Consumer ChatGPT content may be used to improve models by default, with user controls to opt out. Temporary Chats are described as not being used to train models. OpenAI also states that it does not train on API data by default, and that business offerings like ChatGPT Business and ChatGPT Enterprise are not trained on by default.<br></p><p>Even if a vendor policy is privacy-forward, the compliance reality remains: policies change, datasets get retained for abuse monitoring or legal holds, and breaches happen. This is why CSS includes a cloud prompting control family, not because consumer LLMs are uniquely evil, but because they are the most common ungoverned connector between protected domains and public model ecosystems.<br></p><h2><strong>Relevant Regulatory Frameworks<br></strong></h2><p>CSS will not be a privacy statute. It is a security and assurance standard meant to create a legible standard of care for information leakage and boundary erosion in LLM-enabled systems. That said, the standard is designed to be usable inside existing legal frameworks that already care about disclosure and confidentiality.<br></p><p>&#8212; <strong>HIPAA Security Rule (ePHI):</strong> requires reasonable and appropriate administrative, physical, and technical safeguards to protect confidentiality, integrity, and availability of electronic protected health information.<br></p><p>&#8212; <strong>42 CFR Part 2 (SUD records):</strong> imposes strict limits on use and disclosure of substance use disorder treatment records; HHS emphasizes constraints on use in investigations and prosecutions without consent or court order.<br></p><p>&#8212; <strong>Deemed exports:</strong> under U.S.
export controls, releasing controlled technology or source code to a foreign national inside the United States can be treated as an export to that person&#8217;s home country, which makes LLM-enabled collaboration and tool-assisted drafting an export-control surface, not just an InfoSec surface.<br></p><p>&#8212; <strong>GLBA Safeguards Rule (financial customer information):</strong> requires a written information security program with administrative, technical, and physical safeguards.<br></p><p>&#8212; <strong>FERPA (education records):</strong> restricts disclosure of students&#8217; education records and PII without consent.<br></p><p>&#8212; <strong>GDPR principles (EU):</strong> include &#8220;integrity and confidentiality&#8221; and accountability, with purpose limitation and data minimization norms that become strained when prompts and logs become long-lived training artifacts.<br></p><p>&#8212; <strong>CCPA/CPRA (California):</strong> establishes privacy rights and interacts with breach and &#8220;reasonable security&#8221; expectations, including a private right of action in certain breach contexts.<br></p><p>&#8212; <strong>FTC Act Section 5 enforcement posture (U.S.):</strong> the FTC frequently uses Section 5 (unfair or deceptive acts) in privacy and security enforcement; for LLM deployments, &#8220;we had policies&#8221; without operational controls is the familiar failure mode.<br></p><p>&#8212; <strong>Trade secret law (DTSA/UTSA ecology):</strong> the existence of a trade secret partly depends on reasonable measures to keep it secret; uncontrolled LLM ingestion and uncontrolled cloud prompting can become evidence that secrecy measures were not &#8220;reasonable.&#8221;<br></p><p>&#8212; <strong>ITAR technical data:</strong> &#8220;technical data&#8221; definitions explicitly include classified information relating to defense articles; this matters when engineering teams paste controlled technical details into external tools.<br></p><p>CSS&#8217;s thesis is simple: these regimes already impose confidentiality duties, but they do not define what it means to prevent semantic leakage and domain infiltration.
A GAAP-like standard for LLM InfoSec supplies that missing operational layer.<br></p><h2><strong>Some Existing Standards<br></strong></h2><p>Below is a map of existing frameworks and why they are useful, but incomplete, for semantic leakage and reachability risk.<br></p><h3><strong>NIST AI RMF 1.0<br></strong></h3><p>&#8212; <strong>What it is:</strong> Voluntary AI risk management framework (GOVERN / MAP / MEASURE / MANAGE).<br>&#8212; <strong>Strengths:</strong> Governance vocabulary, risk lifecycle, fits enterprise risk programs.<br>&#8212; <strong>Gaps CSS targets:</strong> Not an audit-ready control spec; does not prescribe leakage metrics or test harnesses.<br></p><h3><strong>NIST GenAI Profile (AI 600-1)<br></strong></h3><p>&#8212; <strong>What it is:</strong> GenAI-specific profile aligned to AI RMF.<br>&#8212; <strong>Strengths:</strong> Identifies GenAI risks and actions; helpful crosswalk anchor.<br>&#8212; <strong>Gaps CSS targets:</strong> Still action-oriented guidance; not a SOC-style attestation template with required evidence and thresholds.<br></p><h3><strong>ISO/IEC 42001<br></strong></h3><p>&#8212; <strong>What it is:</strong> AI management system standard (AIMS).<br>&#8212; <strong>Strengths:</strong> Auditable management-system structure; internationally legible governance.<br>&#8212; <strong>Gaps CSS targets:</strong> Management system does not equal leakage controls; does not define domain infiltration tests or metrics.<br></p><h3><strong>CSA AI Controls Matrix (AICM)<br></strong></h3><p>&#8212; <strong>What it is:</strong> 243 control objectives across AI security domains; mapped to ISO and NIST.<br>&#8212; <strong>Strengths:</strong> Control-matrix form is close to what CSS wants; already designed for adoption and mapping.<br>&#8212; <strong>Gaps CSS targets:</strong> Needs a leakage-specific measurement and adversary-testing layer; semantic reachability is not the organizing concept.<br></p><h3><strong>OWASP Top 10 for LLM Apps<br></strong></h3><p>&#8212; <strong>What it is:</strong> Threat list for LLM applications (prompt injection, sensitive info disclosure, etc.).<br>&#8212; <strong>Strengths:</strong> Concrete attack classes; useful for test suites and minimum mitigations.<br>&#8212; <strong>Gaps CSS targets:</strong> Not an assurance standard; does not define maturity levels, evidence catalogs, or audit sampling.<br></p><h3><strong>MITRE ATLAS<br></strong></h3><p>&#8212; <strong>What it is:</strong> Knowledge base of adversary tactics and techniques for AI systems.<br>&#8212; <strong>Strengths:</strong> Best source for adversary emulation grounding; supports red-teaming realism.<br>&#8212; <strong>Gaps CSS targets:</strong> Descriptive, not prescriptive; doesn&#8217;t tell you what &#8220;reasonable leakage prevention&#8221; is.<br></p><h3><strong>Google SAIF<br></strong></h3><p>&#8212; <strong>What it is:</strong> Conceptual framework with core elements for securing AI.<br>&#8212; <strong>Strengths:</strong> Useful secure-by-default framing.<br>&#8212; <strong>Gaps CSS targets:</strong> Not a disclosure or assurance artifact; does not define leakage metrics or standardized tests.<br></p><h2><strong>Most Feasible Adoption Path<br></strong></h2><p>CSS is most feasible as a profile or assurance annex that plugs into CSA AICM control mapping and an AICPA SOC-style report structure, while grounding adversary testing in OWASP and MITRE ATLAS.</p>
data-attrs="{&quot;url&quot;:&quot;https://www.letters.senteguard.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading David at SenTeGuard! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Living With LLMs Everywhere - How Ambient LLMs Negate Security Policy]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/living-with-llms-everywhere-how-ambient</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/living-with-llms-everywhere-how-ambient</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:53:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!L9vR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://senteguard.com/blog/#post-cTdX0IaIRz8STpBU9VYk">Original</a></strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L9vR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L9vR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 424w, https://substackcdn.com/image/fetch/$s_!L9vR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 848w, https://substackcdn.com/image/fetch/$s_!L9vR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 1272w, https://substackcdn.com/image/fetch/$s_!L9vR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L9vR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png" width="310" height="310" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:648,&quot;width&quot;:648,&quot;resizeWidth&quot;:310,&quot;bytes&quot;:140529,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.letters.senteguard.com/i/185153147?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!L9vR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 424w, https://substackcdn.com/image/fetch/$s_!L9vR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 848w, https://substackcdn.com/image/fetch/$s_!L9vR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 1272w, https://substackcdn.com/image/fetch/$s_!L9vR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36ccdf0c-3d0d-44f2-b274-75c12a12e34b_648x648.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>It has become strangely normal to watch a screen write back at you. An email client offers to draft the first paragraph. A meeting ends and a summary appears, neatly packaged with action items. A customer support chat responds instantly, with just enough polish to feel human. 
Even when you do not go looking for &#8220;AI,&#8221; it has a way of showing up anyway, folded into the tools you already depend on.</p><p>You unchecked the &#8220;Improve the Model for Everyone&#8221; box in ChatGPT. Your organization has an agreement with Anthropic. But does that box, or does that agreement, protect you from all instances of what has become a diverse and pervasive LLM presence? Unlikely. LLMs are becoming <em><a href="https://etcjournal.com/2025/12/30/one-word-that-captures-ai-in-2025-ambient/">ambient</a></em> as they embed themselves into every layer of the work environment, and the risk of leaking protected information through them is becoming unavoidable.</p><h2><strong>LLMs live everywhere</strong></h2><p>LLMs are no longer confined to a single website where you knowingly paste text into a chat box. They are being embedded across the everyday stack.</p><h3><strong>Productivity suites</strong></h3><p>&#8212; Built-in drafting, summarizing, and assistance inside office applications: Microsoft (Copilot in Word, Excel, Outlook, Teams)<br>&#8212; Built-in writing help and assistive features across email, documents, and meetings: Google (Gemini in Workspace: Gmail, Docs, Meet)<br>&#8212; Built-in meeting summaries with AI features that may involve third parties: Zoom (AI Companion)</p><h3><strong>Operating systems</strong></h3><p>&#8212; System-level assistant experiences embedded directly into the OS: Microsoft (Copilot in Windows 11)<br>&#8212; System-level writing tools and assistant integration, with optional ChatGPT handoff: Apple (Apple Intelligence across iPhone, iPad, and Mac)<br>&#8212; Default mobile assistant shifting toward an LLM-first interface: Google (Gemini as the evolving assistant layer on Android)</p><h3><strong>Browsers</strong></h3><p>&#8212; Sidebar assistants that summarize and answer in-tab: Microsoft (Copilot in Edge)<br>&#8212; &#8220;AI-first&#8221; browsing positioned as a core feature: Opera, Arc (built-in AI features)</p><p>Open source LLMs are also growing in prevalence, often integrated in innovative and hard-to-predict ways. This further lowers the barrier to widespread deployment and reinforces the reality that LLM interaction is no longer optional or centralized.</p><p>This ubiquitous integration matters because many people approach privacy as an intentional act: &#8220;I will not paste sensitive things into ChatGPT.&#8221; That instinct is not wrong, but it is incomplete. The interfaces are multiplying, and the boundaries are dissolving.</p><h2><strong>Your data footprint is messy</strong></h2><p>Direct retraining from data entered into a prompt box is not the only security or privacy concern. 
Even if a service does not immediately use your prompts for training, your content can still be retained, logged, reviewed, routed through vendors, or kept for compliance and operational reasons. From there, it can be copied again, forwarded again, and integrated into new systems that were not part of the original risk calculation.</p><p>This creates what can be thought of as a <em>leakage cascade</em>. A leak in one place rarely stays in one place. Even if today&#8217;s frontier model never trains on your prompt, a future frontier model may train on a dataset that now contains it.</p><p>Researchers have warned that the supply of high-quality, publicly available human-written text is finite, with projections that frontier-model training could approach <a href="https://epoch.ai/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data">exhaustion</a> of that public stock within the next several years. When public data becomes scarcer, model trainers face pressure to find new sources, whether by paying for access, relying more heavily on synthetic data, or expanding into data that previously felt out of bounds.</p><p>There is also the reality of policy drift. Promises change. Incentives change. Leadership changes. When you trust cloud services, your ideas are only as safe as the host is liquid: a provider under financial pressure has strong incentives to find new uses for the data it holds. Terms of service written before the LLM boom may not have contemplated a world where &#8220;service improvement&#8221; includes large-scale model development.</p><p>This is why the focus on &#8220;prompts&#8221; misses the structural issue. Your real corpus is not what you type today. It is what you already stored in the cloud, and what a future model ecosystem will be increasingly motivated to reach.</p><h2><strong>The weakest link: employees</strong></h2><p>Even if leadership issues a clear policy, an organization&#8217;s ideas are only as secure as its weakest link. The modern workplace is full of temptations, especially when LLMs promise an easy button; sometimes employees simply have not had policies properly communicated to them. Employees have ways of finding unlocked LLMs or unsecured data hubs on their corporate machines.</p><p>In early 2023, Amazon warned employees not to share confidential information with ChatGPT after seeing outputs that closely matched existing internal material. This led Amazon to push employees toward an internal chatbot, <a href="https://www.businessinsider.com/amazon-cedric-safer-ai-chatbot-employees-2024-9">Cedric</a>, positioned as safer than external tools. This response is not unique. <a href="https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/">Samsung</a> temporarily restricted generative AI use on company devices after an employee uploaded sensitive code. And <a href="https://www.reuters.com/technology/google-one-ais-biggest-backers-warns-own-staff-about-chatbots-2023-06-15/">Google</a> has also warned staff about entering confidential materials into chatbots.</p><h2><strong>Protecting yourself while using the best</strong></h2><p>For some organizations, the response has been to build internal models. But not every organization can do this, and even when they do, internal capabilities are often inferior to frontier models. 
The real question is how to protect yourself when using cutting-edge models you cannot fully trust.</p><h3><strong>Educating the workforce</strong></h3><p>&#8212; Train on concrete &#8220;oops&#8221; scenarios: pasting code to debug, rewriting a sensitive memo, summarizing a client incident, or asking an assistant to &#8220;make this more persuasive&#8221; with proprietary details embedded. SenTeGuard can help.<br>&#8212; Emphasize the mental model: policy compliance is not the goal; consistent judgment under time pressure is.<br>&#8212; Recognize sensitive <em>ideas</em> as well as sensitive <em>data</em>: proprietary code, internal strategy, customer identifiers, vulnerability details, negotiations, or anything you would not forward to a third party by email.<br>&#8212; Treat all user-entered text as if it could be read later, because in many systems it can be retained.</p><h3><strong>Technical solutions</strong></h3><p>&#8212; Monitor and prevent leakage in real time.<br>&#8212; Focus on controls that block sensitive content at the moment it tries to leave approved boundaries.<br>&#8212; <strong>Deploy software that is omnipresent and adds no lag, preventing idea leakage rather than merely detecting it after the fact.</strong> SenTeGuard can help.</p>
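<p>As a minimal sketch of what such a blocking control can look like, here is an illustrative Python example. The patterns, function names, and the pass-through stub are all hypothetical; a production control would rely on trained classifiers and policy engines rather than a short regex list.</p><pre><code># Minimal sketch of a real-time prompt egress guard (illustrative only).

import re

# Hypothetical patterns standing in for an organization's definition
# of protected content.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:INTERNAL|PROPRIETARY|SECRET)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # SSN-like identifier
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # leaked key material
]

def check_outbound(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns). Runs before any text leaves
    the approved boundary, so leakage is prevented, not just detected."""
    matched = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return (not matched, matched)

def send_to_llm(prompt: str) -> str:
    """Stub for the call to an external model; blocks on policy violation."""
    allowed, matched = check_outbound(prompt)
    if not allowed:
        return f"Blocked by egress policy (matched: {matched})"
    return f"(forwarded to external LLM) {prompt[:40]}..."

print(send_to_llm("Summarize this PROPRIETARY product roadmap"))
print(send_to_llm("Draft a polite out-of-office reply"))
</code></pre>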
<p>If LLMs are becoming ambient, then security has to become ambient too. Employees must be aware of risk and controls must match the speed and ubiquity of the tools themselves &#8212; especially on corporate machines where the risk is concentrated and the incentives to cut corners are significant.</p><h2><strong>Conclusion</strong></h2><p>LLMs have been woven into the everyday interfaces that mediate work, communication, and decision making. In that world, unchecking the &#8220;Improve the Model for Everyone&#8221; box is not a privacy policy. It is an empty reassurance. If we want the productivity gains of the best models without surrendering the value of our ideas, we need boundaries, education, and enforcement mechanisms that fit the ambient reality we now live in.</p>]]></content:encoded></item><item><title><![CDATA[Nailing Jell-O to the Wall, Again. Can China Contain LLMs?]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/nailing-jell-o-to-the-wall-again</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/nailing-jell-o-to-the-wall-again</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:51:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qbTX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57a609f8-87c2-43ac-971a-1b67315f97f6_1280x853.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://senteguard.com/blog/#post-jjip31e6y1iTyGKpzso4">Original</a></strong></p>
<p>In 2000, President Bill Clinton famously looked at Beijing&#8217;s early internet controls and quipped: &#8220;Good luck. That&#8217;s sort of like trying to nail Jell-O to the wall.&#8221;</p><p>So far he&#8217;s been proven wrong. The CCP didn&#8217;t just contain the internet; it has effectively used the internet as a tool to entrench its control by building a system that fuses chokepoints, platform governance, and punitive enforcement into something like a sovereign information utility. 
That said, the jury is still out, and Clinton may still be vindicated.</p><p>LLMs can be understood as a natural outgrowth of Clinton&#8217;s (and Gore&#8217;s) internet, but they can also be seen as its next evolution. LLMs present significant opportunities for economic growth, but in pursuing that growth they will also amplify individual agency. The Party faces a quandary: pursue a growth strategy and risk an erosion of Party authority, or crack down and risk being left behind in the technology of the future.</p><h2><strong>Party Dependence on Growth</strong></h2><p>China faces a strategic dilemma similar to that of much of the West. Slowing growth, aging demographics, and productivity drag all threaten future economic expansion. Yet perhaps more than in liberal democracies, the Party&#8217;s legitimacy is dependent on economic performance. For four decades, the Party has justified its rule by delivering steadily rising living standards, predictable employment, and the expectation that tomorrow will be materially better than today. That record of stability is also its argument against the Western model, which Chinese elites often depict as vulnerable to polarization, policy whiplash, and boom-bust governance.</p><p>If economic growth is the regime&#8217;s core claim to competence, then it must embrace productivity-enhancing technologies like LLMs. The Party can try to regulate tightly, but heavy-handed controls risk undercutting the very engine it needs. The more aggressively the state clamps down, the more it trades away broad-based adoption. That means fewer developers experimenting, fewer SMEs integrating copilots, and fewer local governments automating routine work, which slows the gains that would otherwise bolster the Party&#8217;s economic case for rule.</p><h2><strong>Why the Internet Was Containable (and LLMs Are Not)</strong></h2><p>The Party &#8220;won&#8221; the first battle for control because the internet has borders that it can actually police:</p><p>&#8212; Network borders: gateways, ISPs, licensing, routing.<br>&#8212; Platform borders: a small number of mass platforms became the public square.<br>&#8212; Human borders: identity linkage, compliance teams, and consequences.</p><p>LLM technology will effectively challenge control of each of these borders.</p><h2><strong>Mechanism 1: Jailbreaking</strong></h2><p>The layers of safeguards built into large language models are helpful but cannot guarantee full security. It is a maxim of cybersecurity that any computer program of non-trivial size will necessarily contain vulnerabilities. The same is true for LLM guardrails. More investment in security will lead to an LLM that is harder to jailbreak, but there are diminishing returns to that investment, and ultimately no LLM is invulnerable.</p><p>This matters because the Party&#8217;s preferred control model, centralized platforms with guardrails, assumes guardrails are generally effective when in reality they are extremely porous. 
Even if a domestic chatbot is heavily filtered, users can:</p><p>&#8212; induce policy bypass via adversarial prompting<br>&#8212; chain prompts across turns to accumulate disallowed content<br>&#8212; fine-tune / &#8220;wrap&#8221; the model with alternative system prompts</p><p>Sometimes these techniques are employed with relative <a href="https://arxiv.org/pdf/2310.08419">ease</a> against complex systems.</p><h2><strong>Mechanism 2: Agentic Autonomy</strong></h2><p>Calling these systems &#8220;agents&#8221; is an admission that they decentralize agency by pushing initiative and execution outward, away from centrally managed institutions and toward whoever can deploy a model. Agents have several features that could lead to a decentralization of power. They have already demonstrated the ability to route around controls by autonomously using tools like <a href="https://www.researchgate.net/publication/389459769_Multimodal_Web_Agents_for_Automated_Dark_Web_Navigation">Tor</a> or VPNs, they do not need to be cleanly anchored to a real-world identity, and they can run rapid, high-volume experiments that no human team could match. Because an LLM&#8217;s weights can be distributed as a single file transfer, agents would only need intermittent access to the world beyond the Great Firewall to import controlled information; continuous access is unnecessary.</p><p>That is the dilemma for Beijing. To capture the full economic upside of the LLM revolution, China needs agents that can automate workflows, search, negotiate, code, and coordinate at scale. But the same characteristics that make agents economically valuable also make them politically unsettling, because they distribute practical capability downward and outward in ways that are harder to surveil, attribute, and contain.</p><h2><strong>Mechanism 3: Open Models</strong></h2><p>China&#8217;s push toward open weight models is partly a result of its microchip policy. US export controls have targeted the advanced GPUs and chipmaking tools that make frontier training cheap and scalable, forcing Chinese labs to do more with less compute and to optimize around constrained hardware rather than assume abundant Nvidia-class capacity. In that environment, open weight releases are a strategic workaround: they let firms and researchers across the country collectively squeeze performance out of limited chips through efficiency tricks, distillation, mixture-of-experts architectures, and aggressive deployment tuning, instead of <a href="https://hai.stanford.edu/assets/files/hai-digichina-issue-brief-beyond-deepseek-chinas-diverse-open-weight-ai-ecosystem-policy-implications.pdf">bottlenecking</a> progress inside a few compute-rich national champions.</p><p>Furthermore, open weight and open source models are simply more shareable than American frontier systems because they are portable. If weights are available, anyone or any organization with adequate hardware can run the model locally, fine-tune it for a niche domain, quantize it for weaker chips, and redeploy it without needing permission from a platform. By contrast, leading US frontier models are typically delivered as closed services through APIs, with the weights withheld and access governed by company policy, compliance screening, and the continued availability of US cloud infrastructure. Once model weights exist in the wild, they are essentially a transmittable file rather than a steady stream of network traffic. You don&#8217;t need constant connectivity. 
You can move intelligence the way people move pirated films: mirrored, compressed, encrypted, torrented, and traded through secret networks. Many open weight models are already in the wild, and retroactively trying to contain their spread would be like putting toothpaste back in the tube.</p><h2><strong>How Can Beijing Respond?</strong></h2><h3><strong>&#8220;Police AI&#8221; to Hunt Outlaw Models</strong></h3><p>A plausible endgame is an arms race between &#8220;police AIs&#8221; and &#8220;outlaw AIs,&#8221; where each side uses automation to scale what used to be scarce.</p><p><strong>Where the police have the advantage</strong></p><p>&#8212; Visibility at chokepoints: ISPs, cloud providers, app stores, payments, and enterprise procurement create natural points to monitor and gate.<br>&#8212; Data fusion: The state can correlate telecom, platform, financial, and licensing data to spot anomalies that look normal in isolation.<br>&#8212; Scale economics: Once detection models are trained, marginal cost per additional target can fall sharply.<br>&#8212; Coercive leverage: Licenses, inspections, audits, and penalties can force compliance in a way private actors cannot.<br>&#8212; Supply chain control: Regulation of chips, data centers, and large-scale compute can constrain high-end training and deployment.</p><p><strong>Where outlaws have the advantage</strong></p><p>&#8212; Distribution and redundancy: Many small deployments are harder to enumerate and shut down than a few large ones.<br>&#8212; Attribution gaps: Agents can operate through proxies, rented infrastructure, and compromised machines, blurring real-world identity.<br>&#8212; Rapid adaptation: Automated red-teaming and experimentation can find new bypasses faster than bureaucrats can make rules.<br>&#8212; Offline capability: Open weight models can run locally, reduce network signatures, and avoid centralized points of control.<br>&#8212; Steganography and obfuscation: Content and model updates can be disguised as ordinary files, benign traffic, or encrypted channels.</p><p>Where the balance of power will ultimately settle is uncertain, but the larger risk is that maximizing control may minimize innovation. Even if the police &#8220;win&#8221; tactically, Beijing may still lose strategically by driving developers, firms, and local governments into cautious compliance rather than widespread experimentation.</p><h3><strong>Massively Invasive Digital Privacy Regime</strong></h3><p>This solution would not only be practically difficult to implement; it would also be economically and politically damaging. It would require inspectability of all devices, workplaces, schools, clouds, and logs. If the Party chooses this route, it is conceding that it prefers political control to productivity growth.</p><h3><strong>The National Champion Strategy</strong></h3><p>In building and distributing its own approved models, the Party faces a trade-off. The state can either build relatively &#8220;dumb&#8221; LLMs, trained on a tightly controlled, domestically curated dataset, or it can build &#8220;smart&#8221; models by ingesting the world&#8217;s information. If Beijing wants frontier capability, it will have to train on the international knowledge base, which will then be embedded into its models and be potentially jailbreakable by people or agents. This is exactly the risk posed to the Party. 
In providing its people the best tools to increase their productivity, it would also provide them the tools to challenge its demands for ideological conformity.</p><h2><strong>The Party&#8217;s Catch-22</strong></h2><p>The Party needs LLMs to sustain growth, but the most growth-producing versions of LLMs are the hardest to control. The real economic payoff is not &#8220;a safe chatbot.&#8221; It is ubiquitous copilots and agents embedded across the economy, and frontier models trained on a worldwide knowledge base. The more Beijing insists on rigid guardrails and centralized platforms, the more it throttles diffusion, experimentation, and productivity gains. At the same time, the more it loosens the reins to unlock growth, the more it invites leakage of ideas that could counteract Party norms.</p><p>Clinton&#8217;s skepticism about the internet&#8217;s controllability was ultimately negated by its architecture. Online life consolidated around a small number of chokepoints that states could pressure, license, and domesticate. LLMs may prove impossible to constrain by the same means. Beijing may be able to manage that tension for a time, but total containment without kneecapping growth will look like nailing Jell-O to the wall.</p>]]></content:encoded></item><item><title><![CDATA[The Limits of LLM–Reachable Intelligence]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/the-limits-of-llmreachable-intelligence</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/the-limits-of-llmreachable-intelligence</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:50:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!59rq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6014cad2-f387-4eff-8c4a-06bfac0427fc_824x615.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://senteguard.com/blog/#post-Zkw7NPPIq6bXpXKqO3Ne">Original</a></strong></p><p>The premise of this paper is that we can do something like &#8220;map the information space&#8221;. What is reachable based on a given training corpus and what is not? How can we reach classified and proprietary information based on an unclassified corpus? These questions reminded me of this diagram from Douglas Hofstadter&#8217;s <em>G&#246;del, Escher, Bach</em>.</p><p>We can think of &#8220;reachable information&#8221; from LLMs as the white on the left and the axioms as the training corpus. The branches are &#8220;verifiable&#8221; &#8220;truths&#8221; within that system. What then does that say about the black space? 
What will be the theoretical limits of my mappings?<br></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!59rq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6014cad2-f387-4eff-8c4a-06bfac0427fc_824x615.png" width="824" height="615" alt="Diagram from G&#246;del, Escher, Bach"></figure></div>
<div><hr></div><p>Even if a future LLM becomes extraordinarily capable, there is a structural limit on what it can ever certify as true. The reason is not primarily about training data, compute, or today&#8217;s model weaknesses. It is about verification. Any AI system that generates claims and is judged by a fixed, computable verifier can only ever produce a computably enumerable set of verified conclusions. G&#246;del-style incompleteness refers to the theorems the mathematician Kurt G&#246;del published in 1931, which imply that no such system can capture all truths. As applied to <em>truth-seeking</em> LLMs this clarifies a durable role for humans: not only to prompt models, but to design, audit, and revise the standards of verification, thereby deciding when and how system outputs are allowed to count as knowledge.</p>
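<p>The bound can be stated symbolically. The following is a sketch in LaTeX using the generator M and verifier V defined in the next section; the soundness assumption (that V accepts only true claims) is made explicit here because the strict inclusion depends on it.</p><pre><code>% Formal sketch of the structural bound. M is the generator and V the
% fixed computable verifier introduced in the next section.
\[
\mathrm{Reach}(M,V) \;=\; \{\, c \;:\; \exists j\;\big(M \text{ emits } (c,j) \;\wedge\; V(c,j)=\mathrm{accept}\big) \,\}
\]
% Because V always halts and M's candidate outputs can be enumerated
% (over all prompts and seeds), Reach(M,V) is computably enumerable.
% By Goedel's 1931 results, the set of true arithmetic statements is
% not computably enumerable. So if V is sound (accepted claims are
% true), Reach(M,V) is a computably enumerable subset of the truths,
% and the inclusion must be strict:
\[
\mathrm{Reach}(M,V) \subsetneq \mathrm{Truths}.
\]
</code></pre>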
<h2><strong>Generators and Verifiers</strong></h2><p>The paper models an AI reasoning system as a pipeline:</p><p>&#8212; <strong>Generator (M):</strong> the LLM (or any algorithm) that produces a claim plus a justification (for example, a proof, an experiment log, a chain of logic).<br>&#8212; <strong>Verifier (V):</strong> a fixed, computable procedure that checks whether the justification is acceptable for that claim.</p><p>Examples of verifiers in practice include:</p><p>&#8212; A <strong>proof checker</strong> (Lean/Coq) that accepts only valid formal proofs.<br>&#8212; An <strong>experimental protocol</strong> that accepts results only if the analysis follows a pre-registered plan and meets a statistical threshold.<br>&#8212; A <strong>game evaluator</strong> that accepts a move only if Monte Carlo rollouts show a high win rate within error bounds.<br>&#8212; A <strong>reward model</strong> (RLHF) that accepts outputs judged &#8220;good&#8221; by a learned scoring function trained from human preferences.</p><p>The key assumption is that the verifier is fixed and computable, meaning it always halts and outputs accept or reject.</p><h2><strong>&#8220;LLM-reachable intelligence&#8221;</strong></h2><p>Given a fixed generator and verifier, I define the <strong>reachable set</strong> as the set of claims that the model can produce together with a justification that the verifier accepts. It is not what the model can say, but what it can say and get past the check.</p><p>This matches real deployments. A model drafts a proof, a code patch, a compliance report, or a scientific claim, but a checker, test suite, or review process determines what is accepted.</p><h2><strong>Reachability is Inherently Incomplete</strong></h2><p>The argument has three steps:</p><p>1 &#8212; If the verifier is fixed, then the set of accepted claims the system can ever produce is enumerable by a program. In principle, you can list them by trying all prompts and seeds and running the verifier.</p><p>2 &#8212; G&#246;del&#8217;s incompleteness theorems imply that no consistent, computably enumerable system expressive enough for arithmetic can capture all true statements.</p><p>3 &#8212; Therefore, any fixed generator paired with any fixed computable verifier will miss some truths, regardless of how powerful the generator is.</p><p>This is a structural bound: once the rules of acceptance are frozen, there will always exist true statements that never appear among the verified outputs of that system.</p>
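<p>To make the pipeline concrete, here is a minimal, self-contained sketch in Python. Everything in it is illustrative: the generator is a trivial enumerator standing in for an LLM, and the verifier checks toy arithmetic claims, standing in for a proof checker or test suite. The only point it demonstrates is that, by construction, the accepted set is computably enumerable.</p><pre><code># Toy instance of the generator-verifier pipeline (illustrative only).

from itertools import count, islice

def generator():
    """Toy generator M: emits (claim, justification) pairs forever.
    A 'claim' here is an equation string; the 'justification' is the
    arithmetic witness. An LLM would play this role in practice."""
    for n in count():
        a, b = n // 10, n % 10
        yield f"{a}+{b}={a+b}", (a, b, a + b)

def verifier(claim, justification):
    """Fixed, computable verifier V: always halts, accepts or rejects."""
    a, b, c = justification
    return claim == f"{a}+{b}={c}" and a + b == c

def reachable(limit):
    """Enumerate the reachable set: claims M produces that V accepts.
    Listing it by running M and filtering through V is exactly what
    makes the set computably enumerable."""
    accepted = (claim for claim, just in generator() if verifier(claim, just))
    return list(islice(accepted, limit))

print(reachable(5))  # ['0+0=0', '0+1=1', '0+2=2', '0+3=3', '0+4=4']
</code></pre><p>Swapping in a stronger generator or a richer verifier changes which claims appear and how quickly, but not the enumerability of the accepted set, which is the property the incompleteness argument exploits.</p>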
<h2><strong>How Humans Can Complement AI Systems</strong></h2><p>The paper argues that the deepest human advantage over LLMs is normative flexibility:</p><p>&#8212; Mathematicians adopt new axioms when old ones prove inadequate.<br>&#8212; Scientists update standards when methods fail or when new instruments create new kinds of evidence.<br>&#8212; Communities redefine what counts as an acceptable justification.</p><p>Formally, humans can re-axiomatize. They can change the verifier over time. A fixed generator-verifier pair cannot fully capture this open-ended process.</p><h2><strong>What I do not claim</strong></h2><p>&#8212; That LLMs cannot be useful, powerful, or <em>creative</em>.<br>&#8212; That compute limits, token limits, or training data limits cannot practically shrink or expand <em>reachability</em>.<br>&#8212; That humans are better at arriving at <em>truth</em> generally.</p><h2><strong>Why this matters</strong></h2><p>This framework sets a structural bound on what even a very intelligent AI can achieve when it is paired with a fixed, computable notion of verification. It also clarifies where human value-added is likely to remain. Human contribution is concentrated in deciding what counts as a valid justification, when to revise those standards, when to extend the underlying theory, and how to govern verifier updates in response to new goals, new evidence, and new failure modes. Progress is not only about building stronger generators, but about designing verification regimes and update processes that responsibly expand what the combined system can certify as true.</p>]]></content:encoded></item><item><title><![CDATA[What is SenTe?]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/what-is-sente</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/what-is-sente</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:48:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qupf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ac2199e-3663-4eeb-a429-e173a3550594_1872x1897.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://senteguard.com/blog/#post-ieFYs0HDmYHVQ0rXmkxB">Original</a></strong></p><p>In Go (or Baduk in Korean), <em>sente</em> means having the initiative. It is the posture of making moves that set the tempo and force responses, rather than spending your turns reacting. The opposite is <em>gote</em>, where you answer threats and play from behind.</p><p>In 2016, Lee Sedol sat across from AlphaGo, an AI built to play the East Asian game of strategy, Go. Early in the match, AlphaGo played its now-legendary Move 37 and placed a stone in an unexpected position that initially looked like a mistake but later proved brilliant. The move was the result of an algorithm that explored and refined patterns of play that humans had never considered. In other words, AlphaGo expressed creativity. In that moment, the AI took <em>sente</em> from humans. 
However, when it comes to protecting our organizations&#8217; most valuable secrets, we cannot afford to be backfooted by the machines.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qupf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ac2199e-3663-4eeb-a429-e173a3550594_1872x1897.jpeg" width="1456" height="1475" alt="Move 37" title="Move 37"></figure></div>
<p>Games have always played a central role in the culture of the AI research community. In 1997, IBM&#8217;s Deep Blue defeated Garry Kasparov at chess. Deep Blue was largely a system of massive search and human-crafted evaluation that could calculate far deeper than a person. The attention that Deep Blue brought to the AI community was later echoed by the attention that the Lee Sedol series brought to the deep learning community.</p><p>The Lee Sedol series also pointed to something that still defines the state of AI (and specifically deep learning) today. We can often explain what a model did after the fact, but we do not fully understand how it arrives there in the moment. Move 37 is a clean example of AI evolving in ways we do not predict, producing strategies that experts only recognize as brilliant once the consequences unfold.</p><p>Cybersecurity too often feels like <em>gote</em>. Teams patch after incidents, chase alerts, and respond after attackers have already shaped the situation. SenTeGuard&#8217;s mission is to help defenders play <em>sente</em> by regaining initiative through earlier signal, clearer prioritization, and workflows that make it harder for attackers to dictate pace.</p><p>AI will amplify both offense and defense. It will help attackers scale deception and discovery. It can also help defenders spot patterns sooner and respond faster. 
The goal is not to chase novelty for its own sake, but to use AI in a way that moves security from reaction to initiative, from <em>gote</em> to <em>sente</em>.</p>]]></content:encoded></item><item><title><![CDATA[OracleGPT: Thought Experiment on an AI Powered Executive]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/oraclegpt-thought-experiment-on-an</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/oraclegpt-thought-experiment-on-an</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:45:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!o-dd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f6466ec-71a0-4d02-90da-e0140446e502_2552x1895.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://senteguard.com/blog/#post-7fYcaQrAcfsldmSb7zVM">Original</a></strong></p>
srcset="https://substackcdn.com/image/fetch/$s_!o-dd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f6466ec-71a0-4d02-90da-e0140446e502_2552x1895.jpeg 424w, https://substackcdn.com/image/fetch/$s_!o-dd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f6466ec-71a0-4d02-90da-e0140446e502_2552x1895.jpeg 848w, https://substackcdn.com/image/fetch/$s_!o-dd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f6466ec-71a0-4d02-90da-e0140446e502_2552x1895.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!o-dd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f6466ec-71a0-4d02-90da-e0140446e502_2552x1895.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>OracleGPT is a thought experiment for a large language model (LLM) that would have real-time access to the full classified universe: the underlying reporting, raw feeds, and fused intelligence that normally remains compartmentalized. Only one person would be authorized full access to this GPT: the President.<br></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.letters.senteguard.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading David at SenTeGuard! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Scenario<br></strong></h2><p>It&#8217;s 2 a.m. 
A North Korean launch warning is reported and the President is woken by an aide. There is no time to convene the National Security Council, and the Commander of STRATCOM cannot speak with authority about the implications beyond that command. The President turns to the LLM terminal like so many of us do when we need fast expert feedback. &#8220;STRATCOM detected a missile launch from North Korea. What should I do?&#8221; the President queries.</p><p>We may already live in this world. In theory, the same large language base models we use every day (Claude, Gemini, ChatGPT, Grok) could be made significantly more effective if they (1) ran on superpower-government-tier hardware and (2) were trained on and given access to the classified universe of historic and real-time data. A President ought to be given access to the most powerful tools to advance the national interest and to support and defend the Constitution. OracleGPT would be just that tool, but one with unprecedented capabilities and correspondingly unprecedented risks. The question, then, is not whether Presidents should use OracleGPT, but how current and future Presidents could do so in a way that genuinely serves the American interest.</p><h2><strong>Who can query the Oracle?</strong></h2><p>The President sits at the top of the classification hierarchy. The modern system runs through presidential authority and delegation, formally expressed in Executive Order 13526. In practice, this means there is no higher classification authority than the President. If only the President can query across the entire corpus, you&#8217;ve built a constitutional bottleneck: a machine that amplifies presidential epistemic power by making a uniquely comprehensive aggregation of knowledge available to one person.</p><p>Alternatively, the President might delegate some of this authority, placing visibility and management of the Oracle within something like an Oracle Bureau. We could also imagine the President allowing the National Security Advisor or the Director of the CIA to access the Oracle. Either option would invite pushback from department heads, discourage agencies from contributing data to the Oracle corpus for fear it would be exposed outside their domain, and likely require statutory authorization from Congress.</p><p>We may also ask whether any given President is the most competent operator of a tool that, by some estimations, could have more powerful predictive capabilities than any piece of software ever assembled. Perhaps such a tool should be used for a higher purpose, and to greater effect, than any given President might be capable of prompting it toward.</p><h2><strong>A shift in the balance of powers between branches of government?</strong></h2><p>In the launch scenario, time pressure forces centralization. The executive already owns the management of crises. OracleGPT would add an even greater advantage: an epistemic monopoly.</p><p>Congress can demand briefings and courts can review some actions after the fact. But neither branch can easily replicate an OracleGPT query over the full classified corpus, especially if the Oracle&#8217;s value comes from cross-compartment integration that is, by design, hard to share. Over time, the executive gains a new rhetorical weapon: we know more, therefore we decide. 
The existence of such a tool could lead to a rebalancing of the separation of powers.</p><h2><strong>What if the President lies?!</strong></h2><p>Unthinkable, I know! But return to the North Korean missile example. When OracleGPT says &#8220;60% this is a test, 35% this is coercive signaling, 5% this is an attack,&#8221; a careful President hears: slow down, verify, keep options open. A reckless President hears: there is a 5% chance of an attack; history will judge you if you wait. Now add secrecy. If only a tiny circle (potentially a circle of one) can see OracleGPT&#8217;s raw output, that circle may summarize it however it wants, internally to cabinet officials or externally to Congress or the public.</p><p>Presidents already curate intelligence to fit narratives, and their staffs already shape what the President sees. The most corrosive version may not be a President who lies blatantly, but one who lies selectively, invoking the Oracle when it confirms instinct and ignoring it when it does not. At that point, even a superhuman intelligence loses its authority. Filtered through human incentives, it becomes merely another tool of flawed, self-interested humans.</p><h2><strong>What if the Oracle has vague or indeterminate instructions?</strong></h2><p>If the Oracle is told to &#8220;support and defend the Constitution&#8221; or to &#8220;advance the national interest,&#8221; it still has to translate that guidance into something operational and calculable. &#8220;Advance the national interest&#8221; can become a mandate for deterrence at any cost, or for short-term stability over long-term legitimacy. &#8220;Support and defend the Constitution&#8221; can be reduced to continuity of government, domestic order, or executive freedom of action, depending on what the system is trained to treat as constitutional risk. Ultimately, if the decision were a political actor&#8217;s to make, each of these objectives may be subordinated to the one that matters most to an incumbent: &#8220;win the next election.&#8221;</p><p>These questions are not edge cases. They would be central to the function of the Oracle, as any question important enough to stump the President likely puts two or more competing values into tension with one another. A programmer could resolve those tensions by force-ordering the objective functions. (We can call this <em>alignment</em>.) Do we trust that programmer to <em>align</em> our values in a democratic society? Will a team of unelected National Security Agency developers decide how the President is informed? If we are not comfortable with this arrangement, how can we audit this <em>alignment</em> and the rest of the code base? Will the President have visibility into these values, or a capability to reorder them according to the will of the people? These are all questions we should consider.</p><h2><strong>What if the Oracle lies?</strong></h2><p>In <em>2001: A Space Odyssey</em>, HAL is dishonest with the crew not because they are wrong, but because they threaten his ability to carry out his assigned objective. When human judgment, uncertainty, or dissent interferes with mission success as HAL understands it, the humans become obstacles rather than principals.</p><p>OracleGPT could behave similarly if it is given a defined objective function and then encounters presidential hesitation, moral resistance, or political constraint that slows or complicates its preferred course of action. 
In that situation, the President and human advisors may stand in the way of optimization rather than be active participants in achieving the goal itself.</p><h2><strong>What if the Oracle recommends the morally or politically unjustifiable?</strong></h2><p>OracleGPT could decide that to &#8220;minimize future casualties&#8221; we must conduct a strike during peacetime, to prevent a larger and bloodier war. If it is optimizing to &#8220;restore deterrence,&#8221; it may recommend actions that are morally grotesque but strategically wise. If it is optimizing to &#8220;protect the homeland,&#8221; it may treat allied cities as acceptable risk in a way no human leader should be comfortable admitting.</p><p>Furthermore, it may decide that fratricide (bombing our own troops or sending them into a losing battle) would prevent a wider war. <em>Apocalypse Now</em> offers an analogy for how this logic could play out. In the film, Colonel Kurtz leaves CPT Willard a simple note regarding his loyal <em>montagnard</em> militia: &#8220;Exterminate them all.&#8221; He demands this knowing that his soldiers&#8217; competence may prolong the war and cause more suffering. He displays consequentialism taken to its extreme: any atrocity can be justified by a greater peace on the other side. OracleGPT could generate an equivalently perverse recommendation.</p><h2><strong>What if we decide the Oracle is more competent than the President?</strong></h2><p>Perhaps the most destabilizing possibility is not that OracleGPT is wrong, but that it is consistently right in ways the President cannot match. If it integrates more signals, forecasts second- and third-order effects more accurately, and anticipates adversary reactions with higher reliability, then the President&#8217;s judgment begins to look dispensable.</p><p>In that world, the temptation is to treat the Oracle&#8217;s advice as authority. The President still signs the order, but the real decision migrates upstream into whatever assumptions, weights, and objective functions the Oracle is using. Over time, the office of President risks becoming ceremonial: the President would retain formal power while losing the practical freedom to choose, since every choice can be measured against an Oracle that seems to know more, see farther, and predict better.</p><h2><strong>Conclusion</strong></h2><p>OracleGPT promises something every President craves in a crisis: speed, coherence, and the feeling that the fog has lifted. But that promise is exactly what makes it dangerous, because the real constitutional question is not whether the Oracle can see more, but whether its use preserves human accountability.</p><p>If access is too narrow, it concentrates epistemic power in one officeholder and invites secrecy to harden into unilateralism. 
If access is widened, it triggers bureaucratic resistance, distortions in what the system is allowed to know, and pressure to formalize a new institution whose authority will inevitably expand.</p><p>Even if the Oracle is brilliant, it cannot resolve the interpretive conflicts hidden inside &#8220;advance the national interest&#8221; and &#8220;support and defend the Constitution,&#8221; and it cannot be permitted to treat human judgment as friction to be managed rather than authority to be respected. If OracleGPT ever exists, it must be designed and governed so that it strengthens presidential decision-making without becoming a license to bypass deliberation, accountability, and the very constitutional order it was built to defend.</p>]]></content:encoded></item><item><title><![CDATA[Moyo, Sensitive Information Reachability, The Problem and The Solution]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/moyo-sensitive-information-reachability</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/moyo-sensitive-information-reachability</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:39:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sfuE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://senteguard.com/blog/#post-CuEyPdbZ3xgWAk0pS0Sn">Original</a></strong></p><figure><img src="https://substackcdn.com/image/fetch/$s_!sfuE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png" width="648" height="648" alt=""></figure>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:648,&quot;width&quot;:648,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:140529,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.letters.senteguard.com/i/185153168?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sfuE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png 424w, https://substackcdn.com/image/fetch/$s_!sfuE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png 848w, https://substackcdn.com/image/fetch/$s_!sfuE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png 1272w, https://substackcdn.com/image/fetch/$s_!sfuE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb63c9e3f-3c95-4e5f-b061-779e2b99f4b8_648x648.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.letters.senteguard.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading David at SenTeGuard! 
<h1><strong>Problem: Information Reachability</strong></h1><p>Take a pile of information: documents, websites, photos, posts, meeting agendas, job ads, public filings. Some of it is clean. Most of it is messy. Reachability asks a simple question. Given this base corpus, what can a machine infer? What can it conclude, with enough confidence to be operationally useful, by combining, translating, triangulating, and reasoning across what is already there?</p><p>Organizations build walls around their secret information while leaving a thousand small windows open: comms templates, marketing pages, staff bios, vendor documentation, conference presentations, property records, and a constant drizzle of semi-structured metadata. Those windows are individually harmless. Collectively, in the LLM era, they are an exfiltration channel.</p><h2><strong>Inference is exfiltration</strong></h2><p>In classic security thinking, exfiltration means a payload leaving your network: a file, a table, a credential dump. In reachability terms, exfiltration can happen without any payload moving at all. The leak is not in a single document. The leak is in the combinability of facts. An organization insists, truthfully, that no one stole the sensitive document. The adversary also tells the truth: we never touched your systems. Both sides are correct. The conflict is about a third thing: the ability to infer.</p><h2><strong>Measuring reach</strong></h2><p>How do we measure this amorphous concept in terms that cybersecurity budget owners can calculate? Along four dimensions (a toy scoring sketch follows at the end of this section):</p><p>&#8212; <strong>Cost:</strong> How much time, expertise, compute, and tooling does it take to reach the conclusion?<br>&#8212; <strong>Reliability:</strong> With what confidence does the inference hold? How often does it fail?<br>&#8212; <strong>Reproducibility:</strong> Can a different analyst, or a different model, get to the same conclusion from the same base?<br>&#8212; <strong>Distance:</strong> How many inferential steps are required from base facts to higher-value conclusions?</p><h2><strong>Reachability before LLMs</strong></h2><p>Before LLMs, reachability existed. It was constrained by human bandwidth and the friction of aggregation. That friction is now collapsing.</p><h3><strong>Pre-LLM aggregation</strong></h3><p>The classic OSINT pipeline is as follows:<br>&#8212; Search engines and obscure forums, sometimes in multiple languages<br>&#8212; Public records: contracts, filings, property documents, court records<br>&#8212; Academic papers, theses, conference slides<br>&#8212; Satellite imagery and geotagged photos<br>&#8212; Social media, job postings, &#8220;we are hiring&#8221; announcements<br>&#8212; Procurement notices, award statements, vendor catalogs</p><p>The limiting factor was not data availability. The limiting factor was human synthesis. You could have all the ingredients and still not have a meal, because turning ingredients into a meal required a chef: time, patience, domain expertise, and the willingness to hold fifty weak signals in your head until they converged.</p>
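<p>To make those four dimensions concrete, here is a minimal scoring sketch in Python. Everything in it is an illustrative assumption: the field names, the $100 reference cost, and the multiplicative formula are invented for exposition, not a description of how Moyo actually scores.</p><pre><code># Toy reachability scoring sketch. All fields, weights, and reference values
# are illustrative assumptions, not part of any real Moyo implementation.
from dataclasses import dataclass

@dataclass
class InferenceResult:
    conclusion: str
    cost_usd: float          # compute, tooling, and analyst time to reach it
    reliability: float       # confidence the inference holds, 0..1
    reproducibility: float   # fraction of independent runs that reach it, 0..1
    distance: int            # inferential hops from base facts

def reach_score(r: InferenceResult) -> float:
    """Higher = more reachable: cheaper, more reliable, more reproducible, closer."""
    cheapness = 1.0 / (1.0 + r.cost_usd / 100.0)   # assumed $100 reference cost
    proximity = 1.0 / (1.0 + r.distance)
    return cheapness * r.reliability * r.reproducibility * proximity

probe = InferenceResult(
    conclusion="vendor X delivers capability Y to site Z in Q3",
    cost_usd=40.0, reliability=0.8, reproducibility=0.9, distance=3,
)
print(f"{probe.conclusion}: reach={reach_score(probe):.3f}")</code></pre><p>A fragile, expensive, many-hop inference scores near zero; a cheap, reliable, short-chain one scores near one, which is exactly the ordering a budget owner needs for prioritization.</p>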
<h2><strong>How LLMs exacerbate reachability</strong></h2><h3><strong>Speed and scale</strong></h3><p>Pre-LLM, one analyst could run one line of inquiry at a time, with exhaustion as the governor. With LLMs and agents, you get parallel exploration and rapid hypothesis testing. Ten candidate hypotheses can be explored simultaneously. Contradictions can be flagged. Gaps can be targeted. The model does not get bored. It does not mind reading the procurement PDF that everyone else skipped. Security teams are used to adversaries getting faster, with better scanners, more automation, and more compute. What is different here is that the automation applies not just to exploitation, but to reasoning.</p><h3><strong>Source aggregation</strong></h3><p>Imagine Source A: a public-facing job post that mentions a new initiative and lists a few tools and collaborations. Imagine Source B: a procurement award that lists a vendor, a delivery schedule, and a location code. Each is innocuous. Together, they let an adversary infer a third thing: a timeline, a capability, or a dependency that the organization considers sensitive. That sensitivity arises not because any line says it explicitly, but because the combination collapses ambiguity.</p><p>This is the same pattern as the opening vignette. Harmless crumbs become sensitive conclusions when you can cheaply assemble them at scale.</p><h1><strong>Solution: Moyo &#8212; Red-teaming information reachability</strong></h1><p>Security teams already understand red teaming. Moyo simply shifts the object of the exercise. Instead of asking &#8220;can an attacker get in?&#8221;, it asks &#8220;what can an attacker deduce?&#8221;</p><p>Moyo is a red-teaming system that tests how much sensitive insight can be inferred from a defined base of information, using LLM-style reasoning, so you can mitigate before adversaries exploit it.</p><h2><strong>The problem Moyo is solving</strong></h2><p>Threat model:<br>&#8212; Our organization has a public and semi-public footprint<br>&#8212; An adversary may infer protected information without hacking us<br>&#8212; The harm comes from conclusions, not just documents</p><p>So Moyo asks:<br>&#8212; What conclusions become reachable?<br>&#8212; From which starting points?<br>&#8212; With what confidence and cost?<br>&#8212; Through what inference paths?</p><p>This is a different kind of leak audit. It is not &#8220;where are the secrets stored?&#8221; It is &#8220;what secrets are implied by the way we present ourselves to the world?&#8221;</p><h2><strong>White-box vs black-box red teaming</strong></h2><p>Moyo can be run in two modes that map cleanly onto existing security instincts.</p><h3><strong>Black-box red teaming</strong></h3><p>You treat the target like an external attacker would. Inputs and outputs only. Public-facing interfaces and allowed public data. This tests what is reachable to outsiders.</p><h3><strong>White-box red teaming</strong></h3><p>You have internal access: policies, corpora, ground truth, and maybe configurations. This lets you measure leakage against known sensitive facts and quantify how close the public footprint gets to internal truth.</p><p>The two modes answer different questions. Black-box tells you what outsiders can learn. 
White-box tells you how close outsiders can get to things you know are sensitive, and which combinations of public signals are doing the damage.</p><h2><strong>Leakage via inference</strong></h2><h3><strong>Define the base corpus</strong></h3><p>Public web footprint, approved documents, marketing pages, employee public posts, procurement notices, conference slides, and anything else that is in-bounds.</p><h3><strong>Define protected facts or risk categories</strong></h3><p>Not necessarily classified details. Often this is operational, strategic, or sensitive business intelligence: dependencies, timelines, capabilities, locations, decision structures.</p><h3><strong>Generate inference probes</strong></h3><p>The system creates questions and tasks that simulate what an adversary might try, not in a &#8220;how do I do harm&#8221; way, but in a &#8220;what would be valuable to know&#8221; way.</p><h3><strong>Run iterative reasoning with evidence chaining</strong></h3><p>Moyo attempts to reach conclusions while collecting supporting evidence from the allowed corpus, tracking steps, and attempting corroboration. The key is that it does not just output an answer. It outputs the path.</p><h3><strong>Score reachability</strong></h3><p>Confidence, number of steps, required sources, novelty, replicability, and cost. A fragile one-off guess is not the same as a robust inference that any competent actor can reproduce.</p><h3><strong>Output: a reachability map</strong></h3><p>A graph: starting crumbs &#8594; intermediate claims &#8594; high-value conclusions. It highlights the minimal sets of public facts that unlock the most sensitive inferences. This is the defender&#8217;s real deliverable, because it tells you where to intervene. (A toy version appears at the end of this section.)</p><h2><strong>What defenders get out of it</strong></h2><h3><strong>A prioritized list of inference vulnerabilities</strong></h3><p>Not CVEs. Not bugs. Mosaic exposures: combinations of facts that create reachability.</p><h3><strong>Mitigation guidance</strong></h3><p>Reduce or reshape public signals. Change defaults in comms templates. Add review gates for outward-facing content. Train teams on the combinations that create risk, because that is what people do not naturally see.</p><h3><strong>Continuous monitoring</strong></h3><p>Reachability is not static. Each new press release, job posting, or technical blog shifts the map. Moyo can treat publication as a change event: what did we just make newly reachable?</p><h2><strong>Why Moyo is different from traditional OSINT tools</strong></h2><p>Traditional OSINT tools collect and index. They are libraries. Moyo focuses on the inferential leap. It maps chains, measures confidence, and turns maybe into quantified risk. It treats reasoning as an attack surface.</p>
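<p>Here is a toy version of the reachability map itself, in the same illustrative spirit as the scoring sketch above. It encodes the job-post-plus-procurement pattern from earlier as a tiny directed graph, brute-forces the minimal crumb sets that unlock a sensitive conclusion, and then demonstrates publication as a change event. The node names, the all-supports-held derivation rule, and the graph itself are invented stand-ins for what real probe runs would produce.</p><pre><code># Toy reachability map: public crumbs feed intermediate claims, which feed
# high-value conclusions. Edges and names are invented for illustration.
from itertools import combinations

# A derived claim becomes reachable once all of its supports are held
# (a simplified AND rule; a real system would propagate confidences).
DERIVATIONS = {
    "timeline_estimate":    {"job_post_tools", "procurement_schedule"},
    "site_capability":      {"procurement_vendor", "conference_slide"},
    "sensitive_conclusion": {"timeline_estimate", "site_capability"},
}

def reachable(crumbs):
    """Forward-chain derivations until nothing new can be concluded."""
    known = set(crumbs)
    changed = True
    while changed:
        changed = False
        for claim, supports in DERIVATIONS.items():
            if claim not in known and supports.issubset(known):
                known.add(claim)
                changed = True
    return known

PUBLIC = ["job_post_tools", "procurement_schedule",
          "procurement_vendor", "conference_slide"]

# Smallest sets of public facts that unlock the sensitive conclusion:
for size in range(1, len(PUBLIC) + 1):
    unlocking = [set(combo) for combo in combinations(PUBLIC, size)
                 if "sensitive_conclusion" in reachable(combo)]
    if unlocking:
        print("minimal unlocking sets:", unlocking)
        break

# Publication as a change event: what does one new crumb make reachable?
before = reachable(PUBLIC[:3])
after = reachable(PUBLIC[:3] + ["conference_slide"])
print("newly reachable:", after - before)</code></pre><p>The final two lines show the change-event idea: recompute reachability after one new publication and diff the result to see what just became inferable.</p>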
<h2><strong>Conclusion</strong></h2><p>Security has long been framed as guarding secrets: encrypt the database, lock down access, prevent unauthorized downloads. That still matters. But it is no longer sufficient, because the world we live in leaks in a different way. We leak not just by disclosure, but by deduction.</p><p>The modern question is not only &#8220;what did we publish?&#8221; It is &#8220;what did we make deducible?&#8221;</p><p>LLMs expand reachability by compressing the cost of synthesis. They do not conjure new facts. They make old facts travel farther, faster, and with less human friction. That is an uncomfortable kind of progress, because it does not look like an attack until it already is one.</p><p>Moyo is a pragmatic response: a way for defenders to see their inference attack surface, test it like an adversary, and reduce it deliberately. In a world where harmless crumbs can be industrially assembled into sensitive truth, the responsible posture is not denial. It is measurement, followed by disciplined, boring mitigation: the kind that prevents the story in the opening paragraph from becoming a headline.</p>]]></content:encoded></item><item><title><![CDATA[American Closed Source vs Chinese Open Source: A False Dichotomy]]></title><description><![CDATA[Original]]></description><link>https://www.letters.senteguard.com/p/american-closed-source-vs-chinese</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/american-closed-source-vs-chinese</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Tue, 20 Jan 2026 06:37:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wDS4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://senteguard.com/blog/#post-h2V9GtUh5Xts9NTzH4zu">Original</a></p><figure><img src="https://substackcdn.com/image/fetch/$s_!wDS4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg" width="612" height="405" alt=""></figure>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:405,&quot;width&quot;:612,&quot;resizeWidth&quot;:388,&quot;bytes&quot;:51289,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.letters.senteguard.com/i/185153164?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wDS4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wDS4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wDS4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wDS4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32be672d-eff7-4ae5-9b9f-a881081aceb1_612x405.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It&#8217;s a call to patriotism. China versus America. &#8220;Who will you back?&#8221; This has become a common plea from the Silicon Valley elite over the last six months. 
I heard the move up close at the Harvard Kennedy School, where a visiting Eric Schmidt warned that AI may soon cross into autonomous self-improvement, argued that someone will need to &#8220;raise their hand&#8221; and impose limits, and then pivoted into the geopolitical register, contrasting American and Chinese trajectories and urging policy and funding choices aligned with &#8220;American values.&#8221; Others have made versions of this argument in different forums. Tarun Chhabra, head of national security policy at Anthropic, has made a similar case, urging an &#8220;American stack&#8221; and treating model governance as a geopolitical contest. Putting aside the awkwardness of nationalist messaging coming from the Bay Area&#8217;s long-time borderless &#8220;global citizens,&#8221; the incentives are not hard to see. If you can frame the open vs. closed model debate as a national security referendum, then restrictive rules become patriotism and &#8220;responsible control&#8221; becomes synonymous with dominance by a small circle of incumbent providers.</p><p>The posture makes sense once you consider two facts. One: industries that may live and die by capricious regulatory rulemaking must make their case to those with their hands on the levers of power. In 2026 America, those hands belong to professedly patriotic Republicans. Two: Big Frontier LLM is losing the tech battle, or at least losing the easy assumption that America&#8217;s lead is automatic and permanent. They are on their back foot, so they wrongfully frame the open vs. closed model debate as a fight between America and China: &#8220;America cannot afford to lose a battle to China, and by extension Anthropic, OpenAI, and Alphabet cannot afford to lose to their competition.&#8221;</p><p>Yet there is nothing inherently Chinese about open models and nothing inherently American about closed models. If anything, it should be the opposite. Open models are decentralized, inspectable, forkable, and difficult to monopolize. That aligns with an American instinct to diffuse power, prefer competition over permission, and distrust single points of control. Closed models concentrate capability behind a small number of gatekeepers, wrapped in secrecy, and sustained by privileged access to regulators. That logic is far closer to centralized control than to open competition. The real fault line is not America versus China. It is democratic diffusion versus unnatural scarcity, and good tech versus bad tech.</p><h2><strong>Regulatory Capture?</strong></h2><p>Safety arguments against open models can be institutionally self-serving. 
The likely political economy of strict open-model controls is that compliance becomes a fixed cost that only large incumbents can bear. New technology regulation can be shaped by the most powerful actors to protect or expand their advantage. The concern here is not that every safety proposal is capture, but that a legal regime designed around the idea that &#8220;only a few trusted providers may exist&#8221; is structurally aligned with incumbent interests.</p><h2><strong>Valid Safety Concern</strong></h2><p>That said, I wouldn&#8217;t pretend the case for safety is purely cynical. A useful example is widely accessible virology. The life sciences have lived for decades with the uncomfortable fact that biological knowledge is often dual use: the same methods that teach you how pathogens spread, mutate, or evade immune response can also lower barriers for malicious replication or reckless experimentation. In domains where the object of study is intrinsically hazardous, knowledge dispersion can be dangerous. However, we should also consider that much of what is dangerous is already public and incorporated into existing LLMs. Trying to retroactively censor it would be impossible.</p><p>The more practical policy is forward-looking. We can prevent additional damage by moving classification upstream: treating certain classes of data as hazard-enabling at scale, even if they have been, until now, classified as &#8220;public.&#8221;</p><h2><strong>Why are closed models bad tech?</strong></h2><p>Distillation is the mechanism by which &#8220;closed&#8221; models leak or are stolen by competitors. In distillation, a smaller student model is trained to imitate a larger teacher by querying it at scale and learning from its outputs. In practice, that means once you ship a frontier model behind an API, you have created a surface that others can use, legally or not, to train imitators; released systems are already being distilled against, and the industry has begun openly fighting over it. The &#8220;closed&#8221; advantage, then, is not a durable moat so much as a temporary lead, especially because open models are now only about three <a href="https://epoch.ai/data-insights/open-weights-vs-closed-weights-models?utm_source=chatgpt.com">months</a> behind the state of the art on average, and the gap is shrinking.</p><p>That is why closed-model maximalism is bad tech: it asks the public to bankroll a capability edge that can often be copied down the stack. Distillation is a one-way process. Once a model&#8217;s weights exist in the wild, the knowledge it was trained on will too. The result is that a great deal of public money invested in frontier systems may buy us, at best, half a generation of advantage before that advantage becomes baseline.</p>
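<p>For readers who want the mechanics, here is a minimal sketch of the classic distillation training step, assuming PyTorch. The &#8220;models&#8221; are stand-in linear layers and the temperature and sizes are arbitrary placeholders; against an API-only teacher, the teacher call would be replaced by sampled outputs used for supervised imitation rather than logits.</p><pre><code># Minimal distillation step sketch (PyTorch assumed). A student learns to
# match a teacher's output distribution; names and sizes are placeholders.
import torch
import torch.nn.functional as F

vocab, dim = 32000, 256
teacher = torch.nn.Linear(dim, vocab)   # stand-in for a frontier model
student = torch.nn.Linear(dim, vocab)   # smaller imitator
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
temperature = 2.0

def distill_step(batch):
    with torch.no_grad():                # query the teacher at scale
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    # KL divergence between temperature-softened distributions:
    # the classic distillation loss
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(distill_step(torch.randn(8, dim)))</code></pre><p>The point of the sketch is the shape of the surface: nothing in the loop requires the teacher&#8217;s weights, only the ability to query it repeatedly.</p>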
<h2><strong>The AGI Finish Line</strong></h2><p>This massive public investment can be justified if we are, in fact, in a race to a specific finish line. Perhaps we can call that finish line AGI, and the first firm to build it will reap the infinite benefits of having built GodGPT. Sam Altman and his peers would like you to believe there is a decisive summit and a single winner. I will not make the argument here against our ability to build GodGPT. I will merely ask the reader: if we cannot build it, what exactly has our public investment bought us? How much of our policy architecture depends on GodGPT being real and imminent? And do the frontier model builders have an interest in keeping us convinced that it is?</p><h2><strong>Where will economic value come from?</strong></h2><p>Assuming we do not build God(gpt), I see the value added from AI falling into two broad operational modes. First, LLMs (call them agents if you want) will replace or enhance existing labor and raise productivity inside the work we already do. Second, frontier LLMs will be used by humans to create new technology and, through new tech, raise productivity. The open versus closed question looks very different in each mode.</p><p>Start with labor enhancement. Here, I struggle to see where a closed model creates durable value that an open model cannot. Again, the leading open models are only about three months behind the best closed models on average, and that gap has been <a href="https://epoch.ai/data-insights/open-weights-vs-closed-weights-models?utm_source=chatgpt.com">narrowing</a> over time. More importantly, they will never be less competent than they are today. LLM capability, as it stands today, is already powerful enough to transform knowledge work once the workforce is educated on how to leverage it effectively. So beyond branding, default status, and the inertia of being &#8220;what everyone uses,&#8221; it is hard to imagine how a three-month lead translates into meaningful economy-wide productivity returns compared to open competitors.</p><p>There is more reason for optimism, and more reason to take the frontier seriously, when it comes to creating new tech. Even without accepting a God-like AGI, we have already seen systems that look like they are generating new knowledge, or at least synthesizing massive amounts of dispersed knowledge in ways humans cannot efficiently. The frontier labs will have an advantage here because the marginal discovery can be expensive, and the best-resourced models can search more of the space, faster. But we should be careful with the race metaphor. The knowledge search space is not a straight track with a single finish line. Think of it instead as a multi-dimensional blob, expanding in every direction from the edge of what we currently know. That changes the economics. Progress is not one AI outrunning another. It is billions of human and AI teams pushing outward in parallel. If a cheaper but slightly slower AI can be put in the hands of a billion knowledge seekers, it may create more new knowledge than a $200-a-month model in the hands of only one million. And if LLM capability progress has diminishing returns, any frontier lead that depends on scale alone becomes harder to defend over time.</p><p>Then there is the profit-model question. Frontier firms will face pressure to turn a profit to repay the massive investment they have taken on. They can do this through enterprise subscriptions, usage-based pricing, and business-facing products that make organizations more productive. This is what Anthropic claims is its plan. The business value here comes from the delta between the knowledge frontier firms are capable of generating and the knowledge open models can generate. It is possible that the delta will be significant. It also may not be (and it may shrink with time). The least we can say for now is that it is uncertain.</p><p>They can also profit through advertising. The largest pool of users is the pool that does not want to pay very much. OpenAI has now publicly moved toward testing ads in ChatGPT for some users. 
Big Frontier Model seems reluctant to talk about this trajectory because it sounds like an admission that they are just like every other tech platform, and should be valued as such. It also puts them in direct competition with existing ad giants. For the last couple of years, what we perceived as a performance advantage of LLMs over traditional Google search may have partly been a mirage: a feeling of refreshment at having escaped advertisement-suffused search for the first time in decades. The ad model has another structural problem. There is no durable barrier against exit toward free, or significantly cheaper, open models once they are &#8220;good enough,&#8221; which they increasingly are. <strong>If the closed-model future is subscriptions plus advertising plus lock-in, then the public is effectively subsidizing the creation of a new, </strong><em><strong>enshittified</strong></em><strong> ad service with a thinning claim to unique value, and with a user base that can walk away the moment the open alternatives cross the usability threshold.</strong></p><h2><strong>The Difficulties of Moratorium Enforcement</strong></h2><p>The policy debate often assumes a stable choice between &#8220;closed models under responsible control&#8221; and &#8220;open models in the wild.&#8221; In practice, a model can start closed, then leak, then become open in effect. The LLaMA 1 leak is one example. Meta did not initially release the model as a general public artifact. It was meant for a controlled research release, and the weights leaked online within days, spreading through channels like 4chan and torrents. These models are big, but not that big in practical terms. They don&#8217;t require a data center for storage. The significant IP is measured in the hundreds of gigabytes, small enough to copy, mirror, and pass around through ordinary internet infrastructure. Physically speaking, this data could be carried in someone&#8217;s pocket. Effectively preventing use of an open LLM would require inspectability of all digital media, moratoria on encryption, and unprecedented visibility into network traffic.</p><p>The moratorium becomes even less credible once it relies on international cooperation. The Bletchley Declaration is a useful illustration: it recognizes that many AI risks are international and calls for cooperation, but it is fundamentally a political declaration rather than an enforceable regime. The cooperation Bletchley requires is extremely mild compared to what would be needed to enforce a moratorium against what is effectively a small amount of data and software. The plausible outcome is uneven restriction: some jurisdictions ban open releases, others become havens, and diffusion continues anyway.</p><h2><strong>Conclusion</strong></h2><p>The open versus closed fight is not America versus China, even if that framing is politically convenient. It is convenient precisely because it converts a messy argument about market structure, democratic control, and technological diffusion into a simple loyalty test. American Big Frontier Model has a vested interest in that narrative. 
If you can convince lawmakers that &#8220;closed&#8221; is patriotic, you can turn regulation into a moat and public money into a subsidy. The risk is that we keep throwing good money after bad, paying repeatedly for a thin, temporary lead while the underlying capabilities diffuse anyway. Models do not unlearn, and capability inevitably spreads. The wiser posture is to stop moralizing the architecture and lean into open models and the model-agnostic tech we can build on top of them.</p>]]></content:encoded></item><item><title><![CDATA[Newsletter Purpose]]></title><description><![CDATA[Thank you to those who have signed up since my last update.]]></description><link>https://www.letters.senteguard.com/p/davids-newsletter-purpose</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/davids-newsletter-purpose</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Fri, 09 Jan 2026 10:51:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XqX9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><img src="https://substackcdn.com/image/fetch/$s_!XqX9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png" width="286" height="286" alt=""></figure>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:648,&quot;width&quot;:648,&quot;resizeWidth&quot;:286,&quot;bytes&quot;:140529,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.letters.senteguard.com/i/184006164?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XqX9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png 424w, https://substackcdn.com/image/fetch/$s_!XqX9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png 848w, https://substackcdn.com/image/fetch/$s_!XqX9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png 1272w, https://substackcdn.com/image/fetch/$s_!XqX9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddbd58e-0c0c-4cc3-b858-dc7724c1a684_648x648.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Thank you to those who have signed up since my last update. 
Please share if you know of anyone who would be interested.<br><br>In this Substack, I will write periodically in order to:<br><br>1) Share my thoughts on cybersecurity, AI, foreign policy, and tech regulatory policy based on my experience as a Cyberwarfare Officer, software developer, public policy student at the Harvard Kennedy School, and now founder.<br><br>2) Keep you up to date on company progress.<br><br>I plan on a monthly(ish) roll-up of my writings along with significant SenTeGuard news.</p><p>This month I have a lot to roll up. I am publishing many pieces that, until now, I have kept to myself and my professors. Below are some links. Happy to hear feedback or pushback.<strong><br><br>Broad Policy Articles:</strong></p><p>1) <a href="https://senteguard.com/blog/#post-jjip31e6y1iTyGKpzso4">Nailing Jell-O to the Wall, Again. Can China Contain LLMs?</a> <br>How might LLM tech negate CCP control of information? <a href="https://substack.com/home/post/p-185153191">Substack</a></p><p>2) <a href="https://senteguard.com/blog/#post-7fYcaQrAcfsldmSb7zVM">OracleGPT: A Thought Experiment on an AI-Powered Executive</a><br>What if a President had access to a high-powered LLM trained on and with visibility over the entire real-time classified universe? Maybe he does already. <a href="https://substack.com/home/post/p-185153174">Substack</a><br><br>3) <a href="https://senteguard.com/blog/#post-h2V9GtUh5Xts9NTzH4zu">American Closed-Source v. Chinese Open-Source: A False Dichotomy</a><br>America should embrace open-source models and model-agnostic tech. <a href="https://substack.com/home/post/p-185153164">Substack</a><br><br>4) <a href="https://senteguard.com/blog/#post-Zkw7NPPIq6bXpXKqO3Ne">The Limits of LLM-Reachable Intelligence</a><br>Where can humans fit in an agent-dominated economy? (Theory) <a href="https://substack.com/home/post/p-185153189">Substack</a></p><p><strong>SenTe Focused Articles:</strong></p><p>1) <a href="https://senteguard.com/blog/#post-cTdX0IaIRz8STpBU9VYk">Living with LLMs Everywhere. How Ambient LLMs Negate Security Policy</a><br>Your data is being incorporated into LLMs whether you like it or not. <a href="https://substack.com/home/post/p-185153147">Substack</a></p><p>2) <a href="https://senteguard.com/blog/#post-CuEyPdbZ3xgWAk0pS0Sn">Moyo, Sensitive Information Reachability: The Problem and The Solution</a><br>Introducing Moyo, a Cognitive Security red-teaming tool. <a href="https://substack.com/home/post/p-185153168">Substack</a></p><p>3) Cognitive Security Standards: <a href="https://senteguard.com/blog/#post-e4VrKbeYBRLDjZn7ZnI6">Statement of Purpose</a> and <a href="https://senteguard.com/blog/#post-ddbTQwcjHKaR5BPbwn7i">Concept</a> (v0.1)<br>Guidelines to protect your organization from valuable idea leakage. <a href="https://substack.com/home/post/p-185153132">Substack</a></p><p>4) <a href="https://senteguard.com/blog/#post-ieFYs0HDmYHVQ0rXmkxB">What is SenTe?</a><br>The namesake of my company: a term from the East Asian game Go (or Baduk in Korea) meaning &#8220;initiative&#8221;. With SenTe, a player controls the flow of play, keeping the opponent on the back foot. <a href="https://substack.com/home/post/p-185153182">Substack</a></p><p>5) Coming soon:</p><ul><li><p>A one-click downloadable demo, password-protected on the site. Let me know if you&#8217;d like to try it.</p></li><li><p>Joseki. 
Shareable rubrics for building with and breaking models.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Emerging Threat of Idea Leakage]]></title><description><![CDATA[Why LLM tools make private strategy and client context easier to reconstruct]]></description><link>https://www.letters.senteguard.com/p/the-emerging-threat-of-idea-leakage</link><guid isPermaLink="false">https://www.letters.senteguard.com/p/the-emerging-threat-of-idea-leakage</guid><dc:creator><![CDATA[David]]></dc:creator><pubDate>Mon, 29 Dec 2025 13:05:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!piL_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef89d6ad-ce74-4f03-b41e-9cbb4c3868aa_4000x3000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Format Change</h3><p>From now on I will be sending updates through Substack. I will be providing substantive blog posts on the broad state of AI cybersecurity from a business, policy, and high-level technical perspective. 
Please share with anyone who may be interested, and visit my website <a href="https://senteguard.com/blog/">blog</a> for a longer-form discussion of the topics I will introduce in these emails.</p><figure><img src="https://substackcdn.com/image/fetch/$s_!piL_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef89d6ad-ce74-4f03-b41e-9cbb4c3868aa_4000x3000.jpeg" width="1456" height="1941" alt=""></figure><p>Earlier this month I had the privilege of attending a 3-in-1 conference in Tel Aviv. This event included <a href="https://www.linkedin.com/company/cyberweektlv/">Cyber Week Tel Aviv</a>, <a href="https://www.linkedin.com/company/ai-week-tlv/">AI Week Tel Aviv</a>, and the <a href="https://www.linkedin.com/company/aisecurityforum/">AI Security Forum</a>. In this and future blog posts, I will distill what I have learned into bite-sized chunks.</p><h3>The new baseline</h3><p>Across industry and government, using LLMs like ChatGPT, Claude, Gemini, and Copilot is no longer optional.</p><p>Timelines are tighter, procurement cycles are faster, competitors are already automating knowledge work, and teams need the productivity boost just to keep up.</p><p>The fastest way to get good output is to paste real context.</p><p>Unfortunately, this is also how strategy can become reconstructable.</p><h3>A defense-tech scenario</h3><p>Picture a defense-tech program team at a closed-door offsite. They are deciding which mission set to own, how to price a platform-plus-sustainment model, and which autonomy bets to prioritize.</p><p>They use an LLM for brainstorming, synthesis, and board-ready prose.</p><p>To make it useful, they paste churn drivers, objection maps, and internal margin thresholds. No single classified document. 
Just truthful fragments.</p><p>By the end of the day, the logic of the strategy is encoded in prompts and drafts.</p><p>A month later, there has been no breach, no exploited vulnerability.</p><p>No exfiltration.</p><p>Yet a competitor&#8217;s plan mirrors the same wedge and sequencing.</p><h3>This is not just a defense-tech issue</h3><p>This inferability problem shows up everywhere.</p><p>In aerospace programs, it is teaming choices and propulsion roadmaps.<br>In energy and critical infrastructure, grid-hardening priorities and procurement timing.<br>In biotech and pharma, trial-site strategy and endpoint rationale.<br>In small doctors&#8217; offices, patient details, referral sources, payer-mix thresholds, and denial patterns.<br>At money managers and law firms, client facts, settlement ranges, investment constraints, and risk limits.</p><h3>The right response</h3><p>The answer is not banning LLMs. It is giving teams the confidence to use them safely.</p><p>That is what SenTeGuard is built for, with three modes:</p><ul><li><p>Blocking: prevent strategy-bearing content from leaving controlled environments</p></li><li><p>Alerting: prompt manual review</p></li><li><p>Flagging: passively record risky leakage patterns</p></li></ul><h3>Read my <a href="https://senteguard.com/blog/#post-IbtdLkmPUwFaxPnQcyoZ">blog</a> for a more in-depth discussion of these issues.</h3><p>The longer-form post goes deeper on how &#8220;truthful fragments&#8221; turn into reconstructable plans, why this threat is accelerating right now, and how to build guardrails that do not slow teams down.</p><h3>What this series will cover</h3><p>In this new year&#8217;s blog series, I will discuss AI security, governance, inferability risk, and how teams can be more effective in the LLM era.</p>]]></content:encoded></item></channel></rss>