PageRank for Inference: Mapping Reachability in LLM Systems
In every major computing era, new capabilities create a new kind of complexity, and there is always opportunity in figuring out how to visualize and navigate it. In the late 1990s, the web was exploding in size, but it was hard to know what to trust or where to start until Google made the link graph not just measurable but navigable with PageRank. PageRank did not just score authority; it created a usable interface that turned chaos into confidence.
About a decade later, AWS was not just “renting servers.” It made infrastructure understandable and operable through standard building blocks, APIs, and monitoring, so teams could provision and manage systems deliberately instead of by guesswork. In each case, the winners were the ones who built the maps, metrics, and interfaces that turn a chaotic new substrate into something people can use with confidence. At SenTeGuard, our mission is to make sense of the new LLM information environment.
Example
When a Fortune 500 company deploys an LLM across its knowledge base, what can it infer about merger plans from scattered financial reports, calendar patterns, and organizational changes? What trade secrets become visible when the model connects technical documents with supplier emails and hiring patterns?
No one knows, and organizations that cannot answer these questions cannot safely deploy AI at scale. Without visibility into what becomes inferable, companies face a choice between artificial constraint and uncontrolled exposure. Organizations that ignore this problem will leak intellectual property through inference, face regulatory exposure from unexpected data combinations, and cede competitive advantage to those who can deploy AI systems with confidence rather than caution.
Reachability as a New Risk Surface
LLMs introduce a new kind of complexity. They take scattered fragments across a corpus and make them coherent, not just by retrieving what is already written, but by stitching together implications, filling in missing steps, and surfacing conclusions that were never explicitly stated.
This is reachability: what an LLM can conclude by connecting fragments across your data, even when those conclusions were never written down.
As models improve and their working context expands, the frontier of what can be reached from the same underlying material grows faster than intuition can track. Traditional security assumes data is either accessible or it is not. LLMs break that model. They make inference itself an exfiltration channel. Nothing needs to be stolen if the system can reconstruct sensitive conclusions from scattered signals.
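The simplest way to picture reachability is as graph traversal: fragments are nodes, and an edge means one fragment lets a model step to another conclusion. The sketch below is a toy model under that assumption; the fragment names are hypothetical, and in a real system the inference links would be discovered by probing a model, not hand-listed.

```python
from collections import defaultdict, deque

def reachable(links, start):
    """Return every fragment reachable from `start` by following
    directed inference links (breadth-first traversal)."""
    graph = defaultdict(list)
    for src, dst in links:
        graph[src].append(dst)
    seen = {start}
    queue = deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy corpus: individually harmless fragments chain to a sensitive one.
links = [
    ("org_chart", "hiring_freeze"),
    ("hiring_freeze", "cost_cuts"),
    ("cost_cuts", "merger_plan"),
]
print(reachable(links, "org_chart"))
```

The point of the toy is that nothing in the edge list mentions the merger directly, yet the sensitive node is reachable from a public starting point in three hops.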
The Missing Layer in the LLM Era
The LLM era needs the equivalent of what PageRank and AWS were for their breakthroughs: maps and metrics that make a chaotic information environment legible.
SenTeGuard’s thesis is that information reachability is not a temporary patch but inherent to LLMs as a platform. Models will not solve it on their own because reachability is structural. The default trajectory is expanded reachability, and the only question is whether you can see it happening and whether you can bound it intentionally.
Our response is an integrated platform with three layers (and counting) that work together to make the LLM information environment visible, controllable, and operational.
Moyo: The Mapping Layer
Moyo is the mapping layer. It is built to answer the hardest question in LLM security:
What becomes inferable when you combine these sources?
Moyo treats inference as an exfiltration channel and helps organizations model their information environment as a reachable space. It runs tests that probe what an LLM can infer from a base corpus and produces legible outputs that show where exposure is growing and where controls are working.
— When a company combines its hiring database with Slack archives, Moyo shows that the LLM can now infer which executives are likely to be terminated.
— When engineering docs meet customer support tickets, Moyo reveals what product vulnerabilities become visible.
Moyo creates the PageRank equivalent for inference risk: a usable interface that makes reachability navigable.
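To make the PageRank analogy concrete: the same power-iteration idea that ranked web pages can rank fragments in an inference graph, surfacing the "hub" documents that many inference paths flow through. This is an illustrative sketch of plain PageRank, not Moyo's actual scoring method.

```python
def pagerank(edges, damping=0.85, iters=50):
    """Power-iteration PageRank over a directed fragment graph.
    High-scoring nodes are hub fragments that many paths flow through."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for d in out[n]:
                    new[d] += share
            else:
                # Dangling node: spread its rank uniformly.
                for d in nodes:
                    new[d] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Fragment "c" sits on more inference paths, so it ranks highest.
scores = pagerank([("a", "c"), ("b", "c"), ("c", "a")])
print(scores)
```

A fragment with a high score here is a natural place to focus controls: removing or restricting it cuts many inference paths at once.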
SenTeGuard: The Enforcement Layer
SenTeGuard is the enforcement layer. It sits where humans and systems actually touch LLMs—documents, prompts, workflows, and connectors—and reduces exposure at the point of use.
— When a developer pastes code into an LLM, SenTeGuard blocks the API key embedded in line 47 before it reaches the model.
— It helps organizations prevent sensitive data from entering unsafe contexts.
— It detects high-risk joins where separate domains get combined in ways that create new conclusions.
— It applies policy to real workflows rather than abstract rules.
If Moyo shows you where the boundary is, SenTeGuard enforces it.
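Point-of-use enforcement, like the API-key example above, often starts with pattern scanning before text reaches a model. A minimal sketch under that assumption; the patterns are illustrative only, and production systems use far broader rule sets plus entropy checks.

```python
import re

# Illustrative patterns only, not an exhaustive rule set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style key
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*=\s*\S+"),  # generic key assignment
]

def scrub(text):
    """Redact likely secrets before text enters an unsafe context."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

print(scrub("token AKIAABCDEFGHIJKLMNOP in config"))
```

Scanning at the boundary is what makes enforcement workflow-level rather than abstract: the policy runs where the paste actually happens.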
Joseki:Wrapperhub — Integration and Orchestration
Joseki:Wrapperhub is the integration and orchestration layer that makes the messy middle legible.
In practice, LLM use does not happen in a single prompt box. It happens across wrappers, agents, connectors, routing logic, tool calls, retries, and a growing pile of glue code that quietly becomes your real product surface.
Joseki:Wrapperhub centralizes that surface.
It standardizes how models are invoked, how tools are exposed, and how context is assembled, so behavior is consistent enough to reason about and evolve. It also creates a single place where guardrails, logging, and evaluation hooks can live, turning “a bunch of LLM experiments” into an operational system you can instrument, compare, and improve over time.
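The value of centralizing invocation can be sketched as a single hub object where pre- and post-hooks live. This is a hypothetical interface, not Joseki:Wrapperhub's actual API; the echo backend stands in for a real model call.

```python
import logging

logging.basicConfig(level=logging.INFO)

class ModelHub:
    """One place to invoke models, so guardrails and logging apply to
    every call instead of living in scattered glue code."""

    def __init__(self, backend):
        self.backend = backend   # callable: prompt -> response
        self.pre_hooks = []      # e.g. secret scrubbing
        self.post_hooks = []     # e.g. output policy checks

    def invoke(self, prompt):
        for hook in self.pre_hooks:
            prompt = hook(prompt)
        logging.info("invoking model, prompt length=%d", len(prompt))
        response = self.backend(prompt)
        for hook in self.post_hooks:
            response = hook(response)
        return response

# Usage: an echo backend with a trivial guardrail hook.
hub = ModelHub(backend=lambda p: f"echo: {p}")
hub.pre_hooks.append(lambda p: p.replace("secret", "[blocked]"))
print(hub.invoke("my secret plan"))  # -> echo: my [blocked] plan
```

Because every call passes through `invoke`, adding a new guardrail or evaluation hook is one registration, not a change to each wrapper and agent.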
From Experiments to Infrastructure
This field is new, and the problems change weekly because the platform changes weekly. Model capabilities rise. Retrieval improves. Tool use grows.
As models become embedded across regulated and high-stakes environments, the need for legible reachability maps and enforceable boundaries becomes foundational infrastructure. As LLMs move from experiments to infrastructure, organizations need the same confidence in their AI environment that AWS gave them for cloud resources.
That is what we are building.
Mission
SenTeGuard’s mission is to make the LLM information environment legible and governable. We build the maps and metrics that turn AI risk from vibes into engineering.
