SenTeGuard Update - March 2026
Thank you to those who have signed up since my last update. Please share this with anyone who might be interested. If you receive this newsletter at a ".edu" address you will soon lose access to, and you would like to keep receiving it, feel free to add another email.
On Request:
Prototype testing of the Moyo information space mapper. Find leaks of your controlled information (classified, proprietary, personal) in public LLMs.
SenTeGuard Pilot.
Broad Policy Articles:
Wrangling With Explosive Growth Substack
Article published in the Harvard Kennedy School Policy Review last month. I argue that while the pace of AI development can feel unprecedented and unsettling, periods of rapid, seemingly unconstrained technological growth are not new. How have we addressed unconstrained growth in the past and how can we do so now?
Software engineers have been able to incorporate LLMs into their workflows thanks to their field's looser traditions of attribution (they copy and paste a lot). The article discusses how strict attribution standards in other fields have impeded growth, and why loosening them may lead to faster growth of knowledge.
Large Language Models and Gaps in Meaning (Theory) Substack
I discuss some of the structural limitations LLMs face in representing ideas using human language.
SenTe Focused Articles:
Cognitive Security Standards (CSS). The topic of my Harvard Kennedy School culminating Policy Analysis Exercise. I am building a set of best-practice standards to prevent leakage of protected information (classified, proprietary, personal). The full version will be published in the coming months.
PageRank for Inference: Mapping Reachability in LLM Systems Substack
Google’s central thesis with PageRank was to bring order to a disordered and chaotic network of internet links. SenTeGuard’s mission is to bring order to a similarly chaotic space: what LLMs can know and how fast they will learn.
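For readers unfamiliar with the analogy, here is a minimal PageRank sketch (power iteration) over a toy directed graph of documents that reference one another. The node names and edges are purely illustrative assumptions, not anything from the article or the Moyo mapper:

```python
# Toy PageRank via power iteration. Node names are hypothetical examples.
def pagerank(edges, damping=0.85, iters=100):
    nodes = sorted({n for edge in edges for n in edge})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    for _ in range(iters):
        # Every node gets a small baseline share of rank.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            # Dangling nodes (no out-links) spread rank evenly to all nodes.
            targets = out[n] or nodes
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# A tiny "information space": which document points at which.
edges = [("memo", "report"), ("blog", "report"), ("report", "summary")]
ranks = pagerank(edges)
# "report", with two inbound links, accumulates the most direct rank flow.
```

The same iteration works unchanged if the nodes are pieces of controlled information and the edges are paths by which an LLM could reach them; the ranks then suggest which items are most exposed.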
Coming Soon: Joseki. Shareable rubrics for building with and breaking models.
