Hyperseed v2
Toward a Semantic-Primitive-Based Ontology That Actually Does Something
Hyperseed v2 is both a new and pretty interesting philosophy of life, the universe and everything — with a postmodern, relativist, Buddhistic flavor expressed with an analytical-philosophy++ level of mathematical precision ... and a tool for exponentially speeding up creative reasoning in AI systems, especially for cases that require a combination of rigor, imagination and cross-domain thinking.
I’ve been playing with a side project off and on for about twelve or thirteen years now — oh, since around 2011, or honestly even a bit longer than that — which is an attempt to create an ontology of semantic primitives that can actually be useful for AI inference. And I mean useful in a strong sense: not just a philosophical curiosity, not just a reference taxonomy, but something that changes what a reasoning system does.
I believe I have made some significant progress recently, aided by LLMs in various ways (though very much guided by my human brain for better or worse, this is not the sort of thing current LLMs have much “intuition” about…).
The challenge with this sort of project isn’t so much enumerating what the primitives would be, though in the project I’ll describe here I’ve in fact put a lot of effort into this — and made a bunch of eccentric choices along with the conventional ones. In the end, I don’t think it matters that much exactly which 50 or 200 concepts are in your list of primitives — of course, there could be tragically bad decisions, but there’s a wide range of reasonable options.
The real challenge has been thinking through: what does it mean to productively cross-connect, or compositionally represent, a broad scope of concepts in terms of a smaller set of primitives? And then, how can you use this to do inference? And how can you adaptively sculpt the collection of primitives and the reduction process over time, so it’s not all fixed from the start?
I think I now have some decent answers to these questions that may actually be useful for AI inference control in commonsense or scientific reasoning contexts.
This is totally an eccentric research direction relative to either standard modern AI or the more algorithmic approach we’ve mostly been taking in our work on Hyperon AGI — but on the other hand, it’s a way of thinking with a very lengthy history.
The Lineage
The idea of boiling concepts down to semantic primitives goes back at least to Leibniz — anyway that’s where I first encountered it, reading a book of letters Leibniz had written (to whom I don’t recall) when I was in college at age 16 or so. He called it the Universal Characteristic. The idea was that you could factor any concept into a combination of prime conceptual elements, just like you can factor any number into a product of primes. This was a guy who was simultaneously inventing elements of probability theory and logic (not to mention differential and integral calculus) and trying to build a gigantic mechanical calculator, along with a zillion other fascinating pursuits — his Universal Characteristic didn’t get very far, but the idea was there and was astoundingly forward-looking.
Rudolf Carnap took it much further in the last century, fleshing out in a formal language/logic sense what it means to reduce concepts to logical combinations of universals. Then, from the linguistics side, a linguist named Anna Wierzbicka observed that certain concepts occur as single words or brief expressions in essentially every human language — including Australian Indigenous languages, Native American languages, and so on. Things like you, me, before, after. She built what she called the Natural Semantic Metalanguage out of a few dozen such universals, and argued that every concept can be expressed as a simple combination of these primitives.
None of these intriguing approaches has yet been worked out at the level needed to drive AI application – and it was never quite clear to me whether the incompleteness of these initiatives was because they just hadn’t been pushed far enough or because of flaws in the underlying concept.
Where I got really captured by the “semantic primitives” problem again, later on in my research career, was when I read David Chalmers’ book Constructing the World —published in 2012, though I read the preprint a bit before that. He was trying to give a modern analytical philosophy take on what it would mean to have a collection of semantic primes. What he argues (put very, very roughly) is that there’s a core set of concepts such that if a mind puts those core concepts together with its raw observations, it can use that to understand any other concept. So the notion of reduction he’s looking at isn’t just “a complex concept is a logical combination of simpler concepts.” It’s that a really smart reasoner, given the core concepts and raw data, can then understand any relevant concept it encounters. And there’s a bunch of subtlety about intensional versus extensional reasoning — his argument being that what you get is intensional scrutability: the relevant properties of any concept can be worked out by a sufficiently smart reasoner starting from the core concepts and incoming information.
There’s a loose connection between these ideas and things like FrameNet, Cyc, or SUMO, which are ontologies in the more modern sense — attempts to formulate concepts and relationships that span the space of all concepts. I am very familiar with these tools – we tried to use semantic parsing into FrameNet and WordNet in OpenCog Classic a long time ago and got somewhere, but not anywhere that interesting. But these are less thorough in their ambition than what Leibniz and Carnap were aiming at; they don’t try to literally serve as a basis for all knowledge, just as a way of structuring certain simpler aspects of knowledge.
One thing that frustrated me about Chalmers’ book is that his original contribution is basically an in-depth argument that it is possible to create a collection of semantic primitives – and he lays out the categories of primitives you’d want: things about space and time, about self and other, about goals, and so forth. But he doesn’t actually try to nail down a first stab at a specific list.
Now I can basically see why, because his argument doesn’t depend on any particular choice. It’s a bit like bases in linear algebra: the fact that any vector in ℝⁿ can be expressed as a combination of some basis of n elements doesn’t depend on which specific basis you choose. That theory is actually more elegant when you don’t pin it to a particular basis. On the other hand, when you teach linear algebra, you do work with specific bases, because it helps everyone understand what you’re actually doing.
So what I started trying to do — beginning around 2013 or 2014 in Hong Kong — was just make it real, and write down an actual list of semantic primitives that could serve as a basis for intensional scrutability a la some variation of Chalmers’ ideas. I put this in a manuscript I called “The Composition of Mind and Reality.”
Making It Real: Hyperseed
That was very tedious. I got partway through putting the manuscript together and gave up — because listing the primitives is one thing but then you have to formalize their properties and interrelationships in some reasonably detailed way or you can’t get very far in terms of reduction. In the end, I realized I had sucked myself into doing a more detailed, smaller version of what Cyc or SUMO tried to do: formalizing some collection of 50 to 200 concepts in logic, and then arguing that a reasoning system using intensional inference can boil down other concepts to expressions in this core set plus raw perceptual data. And this just seemed a terrifyingly tedious thing to do by hand, even for only a couple hundred concepts, not the millions that Cyc tried to handle. I felt like the detailed formalization really should be done by AI — though I also saw the chicken-egg problem, that one wanted to have the ontology formalized to drive AI reasoning in an efficient way.
I made another stab in 2024, aided by the LLMs at the time — articulating what I called Hyperseed version 1. “Hyperseed” meaning of course a “seed ontology” for Hyperon, which was supposed to learn its own ontologies growing out of this seed. It was a worthwhile attempt, but I was too busy and the LLMs of the time were too stupid to make really adequate formalizations. Also at that moment we didn’t yet have a scalable Hyperon infrastructure to run ontology-guided inference in. So I paused work on this after a first stab.
Fast forward to late 2025 / early 2026, though – and I have finally worked through making what seems to be an adequate semantic-primitive-based logical ontology, by using Python scripts driving today’s smarter large language models. It turns out that modern LLMs, if you give them a few paragraphs describing how you think a concept should be formalized, can produce a decent first stab at turning that into a formulation in higher-order predicate logic, with occasional use of paraconsistency. I took a couple hundred core concepts, wrote out a few paragraphs on each, then used LLMs to turn them into predicate logic formulations. Checking whether what they produced made sense was the tedious part, which I did to a significant though not totally thorough extent.
Once I had that, I could ask the LLM to prove theorems about combinations of these concepts, which starts to get more interesting. So: okay, we have the notion of a mind, a machine, intelligence, energy, resources — all formalized in a nice way in higher-order predicate logic. Now what can you conclude about the path to ethical superintelligence? An LLM can do a bunch of rigorous-looking proofs about ethical properties of highly superintelligent minds, based on formalizations of the core concepts involved. Which is an amusing party trick, but also points at something real about what a formalized ontology enables.
This has resulted in what I think of as a not-really-for-human-consumption LLM technical document — a 1400-page thing sitting in my Google Drive giving predicate logic formalizations of 200 core concepts, together with a bunch of mathematical theorems combining them in various ways. I have read some versions of all the sections in there, during various iterations of the production process; but it’s definitely a document intended as an initial condition for some error-correcting AI reasoning and learning rather than something you’ll print out and read on your beach towel.
This is the ontology I’ve been calling Hyperseed v2. It’s organized as a kind of semantic stack spanning mind, experience, and reality. Each concept gets both an informal definition and a formal one using PLN-ish probabilistic logic. The concepts are related using PLN-style links — inheritance, similarity, causal implication, contextual restriction. And there’s a loose approximate hierarchy to it: some things are more primitive, like Distinction, Difference, or Presentational Immediacy; others are further along, like Society or Global Brain. But it’s not a rigid hierarchy — it’s a web with lateral bridges and feedback loops.
To give a concrete flavor of the style: the concept Control is formalized along the following lines.
To control an outcome means that by choosing an action, you can reliably make that outcome happen or prevent it.
Formally, the control strength of action A over goal G is defined as the causal implication of A for G, restricted to antecedents that some observer interprets as actions.
(and then a mathematical expression follows, defining and calculating the strength of this causal implication)
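To make the flavor concrete in code: here is a minimal sketch of one way such a control-strength measure could be operationalized, assuming simple event traces and a boolean observer predicate. The function and variable names are mine, invented for illustration, not the actual Hyperseed formalization.

```python
# Illustrative sketch of Hyperseed-style "control strength": the causal
# implication of an antecedent for a goal, restricted to antecedents that
# some observer interprets as actions. Names here are hypothetical.

def control_strength(traces, antecedent, goal, is_action_for_observer):
    """traces: list of (antecedent_event, outcome_event) pairs.
    is_action_for_observer: predicate encoding the observer relativization,
    i.e. whether this observer interprets the event as an action."""
    relevant = [(a, o) for (a, o) in traces
                if a == antecedent and is_action_for_observer(a)]
    if not relevant:
        return 0.0  # no evidence of action-mediated influence
    hits = sum(1 for (_, o) in relevant if o == goal)
    return hits / len(relevant)

# Toy usage: an observer who treats "press_switch" as an action.
traces = [("press_switch", "light_on")] * 8 + [("press_switch", "light_off")] * 2
print(control_strength(traces, "press_switch", "light_on",
                       lambda e: e == "press_switch"))  # 0.8
```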
Notice the observer relativization in this example — that’s characteristic of the style of the whole Hyperseed ontology. It’s not saying the antecedents are actions; it’s saying there’s some observer who interprets them as actions. In that sense the flavor of Hyperseed is very much continental philosophy rendered in the vernacular of analytical philosophy: everything is relative, you don’t assume physical reality is real, you don’t assume the observer itself is real. You build everything up from raw experiential atoms, a bit like Buddhist psychology or PLN-style observational semantics, and then whatever makes sense to relativize to some observer gets relativized. When you nail down the context for a specific act of reasoning, you can say “in this context I’m assuming electrons, protons, and neutrons exist and that my own mind exists” — but you try to bake in as few assumptions as possible at the foundational level.
The 200+ concepts in the ontology span from
the very abstract and mathematical — Pattern, Process, Distinction, Probability, Kolmogorov Complexity, Entropy, Quantale Weakness, Infinity —
up through the biological and evolutionary — Alive/Dead, Metabolism, Autocatalytic Sets, Evolved, Genenergy —
through the cognitive — Attention, Knowing and Thinking, General Intelligence, Cognitive Synergy, Transfer Learning —
and into the experiential and phenomenal — Consciousness, Experiential Truth, Presentational Immediacy, Ineffability, Dreaming, Entheogen —
and all the way up to the social and cultural — Culture, Society, Tribe, Global Brain, Mindplex, Values, Spirituality.
Some concepts are in there because they’re standard and expected. Some are a bit eccentric, like Entheogens or Wu Wei Geodesic. And some ended up there because they were needed to define other things — Presentational Immediacy, for instance, comes out of a Whiteheadian analysis of perception in terms of process.
What Makes an Ontology Useful for Inference?
Now let’s get into the part that I think is most interesting from an AI standpoint, which is the question: even if you accept that some collection of semantic primitives is a good idea, what makes such a collection useful for inference rather than just philosophically satisfying?
There are, I think, three things you want.
First, reduction. At least in an approximate (but efficient) sense. You want any concept to be approximately expressible, in a reasonably compact way, as a weighted logical combination of concepts from the ontology. I’m not asking for the complete reduction you’d get in a Cyc-style ontology, where you try to define what a laptop is with total precision as a logical combination of primitives — I’m not sure you ever get there, and if you do it’s monstrous and probably not useful. What you want is that relatively compact combinations of primitives give you relatively decent approximations. And if the approximate reduction of “laptop” and the approximate reduction of “tablet” give you similar profiles, whereas “laptop” and “tree” don’t — that’s good. The approximate representation should roughly preserve semantic proximity.
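A toy sketch of what this proximity-preservation property looks like, with reduction profiles as sparse weight vectors over primitives. The primitives and weights here are invented for illustration; any real reduction machinery would be learned, not hand-set like this.

```python
# Concepts as weighted profiles over ontology primitives; cosine
# similarity between profiles should roughly track semantic proximity.
import math

PRIMITIVES = ["artifact", "tool", "information", "alive", "process"]

def profile(weights):
    # weights: dict primitive -> weight; unmentioned primitives default to 0
    return [weights.get(p, 0.0) for p in PRIMITIVES]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

laptop = profile({"artifact": 0.9, "tool": 0.8, "information": 0.7})
tablet = profile({"artifact": 0.9, "tool": 0.7, "information": 0.8})
tree   = profile({"alive": 0.9, "process": 0.6})

assert cosine(laptop, tablet) > 0.9  # similar profiles: close concepts
assert cosine(laptop, tree) < 0.1    # disjoint profiles: distant concepts
```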
Second, inference-control signal. The ontology needs to supply a structured measure of semantic proximity that a search process can actually use. The most direct quantitative idea is: prefer inference steps with high expected marginal information gain per unit effort cost.
Here you’re treating inference control as attention allocation — which from a Hyperseed perspective it literally is, because Hyperseed formalizes Attention as the deliberate allocation of limited genenergy (resources). You can use this framework to guide PLN’s backward chaining, forward chaining, or a bidirectional geodesic search where you’re going forward from premises, backward from conclusions, and using statistical search to match the two ends up.
In practice this means: if you’re doing PLN inference control and making judgment calls about which premises to select, give higher weight to premise sets that involve core ontology terms. You wouldn’t force it — if you’re doing reasoning about specific chemical combinations and there’s a good inference chain just involving the rules of chemistry, just do it. But on the margin, when it’s a judgment call, prefer things involving the ontology.
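The "on the margin, prefer the ontology" heuristic can be sketched as a premise-scoring function: expected marginal information gain per unit effort, with a mild multiplicative bonus when a premise set touches core ontology terms. The scoring rule and the bonus value are assumptions for illustration, not PLN's actual inference-control code.

```python
# Premise-set scoring for inference control: information gain per unit
# cost, gently biased toward premise sets involving core ontology terms.

CORE_TERMS = {"Pattern", "Process", "Attention", "Control", "Distinction"}

def premise_score(premise_set, expected_info_gain, effort_cost,
                  ontology_bonus=1.2):
    """A mild (not forced) preference: the bonus only tips judgment calls,
    it never overrides a clearly better domain-specific inference chain."""
    base = expected_info_gain / effort_cost
    touches_core = any(term in CORE_TERMS for term in premise_set)
    return base * (ontology_bonus if touches_core else 1.0)

# Two roughly-equal candidate premise sets: the one routed through a
# core concept wins on the margin.
a = premise_score({"Attention", "working_memory"}, 0.5, 1.0)
b = premise_score({"neuron_count", "reaction_time"}, 0.5, 1.0)
assert a > b
```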
Third, ontological separation. The deepest property you want is that the ontology’s base concepts act as a thin waist in the concept-relevance graph — an approximate separator, so that most information flow between distant concepts has to pass through the ontology. When this holds, you can prove exponential speedup. I’ve been calling the ratio of inference cost with and without ontological guidance the Ontological Efficiency Ratio — Claude named it that, and it’s a bit of a fancy name for “does the ontology make reasoning cheaper or not” — but the point is you can measure it empirically via walk-forward inference trials.
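Measuring the OER empirically could be as simple as the following sketch: run the same queries with and without ontology-guided control and average the cost ratios. The trial harness here is a stub with made-up numbers; in practice each pair would come from running the actual chainer twice on the same query.

```python
# Empirical Ontological Efficiency Ratio from walk-forward trials:
# ratio of unguided to guided inference cost (e.g. nodes expanded).
# OER > 1 means the ontology is making reasoning cheaper.

def ontological_efficiency_ratio(trials):
    """trials: list of (cost_unguided, cost_guided) pairs measured on
    the same query under the two control regimes."""
    ratios = [unguided / guided for (unguided, guided) in trials if guided > 0]
    return sum(ratios) / len(ratios)

# Toy walk-forward record (invented numbers for illustration).
trials = [(1200, 300), (800, 400), (1500, 250)]
print(ontological_efficiency_ratio(trials))  # 4.0
```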
The Ontology Should Learn
Now, let me be clear: I have loads of arbitrariness in Hyperseed, and that’s true in every similar thing in the history of philosophy. It’s surely part of why Chalmers didn’t try to nail down a specific ontology in Constructing the World — but just argued that such a thing is possible. Once you put a specific ontology out there, a global decentralized army of industrious analytical philosophers will invade your territory arguing intensely about the definitions of “thing” versus “stuff” for a hundred years of research papers.
So any ontology you load into a system should be treated as an initial condition, and you need to specify a method for refining and improving it. And once you have explicit, quantitative criteria for how well an ontology supports inference control — things like the OER, or diagnostics measuring “leakage” in the concept-relevance graph — it becomes natural to treat ontology design itself as a learnable optimization problem. An expensive one, operating at a meta level relative to inference itself, but a well-defined one.
I think there are roughly four levels at which this learning can happen.
Level 1: Weight adjustment. No structural changes; just update the weights in the reduction profiles — how strongly each concept connects to each base concept — in response to inference feedback. This is basically just Bayesian updating of concept-relevance priors, and other sorts of PLN inference. Low cost, always applicable – basically “reasoning as usual.”
Level 2: Bridge axiom repair. Add, remove, or re-weight the bridge axioms that connect abstract ontology predicates to domain-anchored predicates. Triggered when you’re seeing systematic misranking: the ontology is pointing inference in the wrong direction because the way abstract concepts connect to domain facts is off. This is also basically vanilla PLN inference that should happen as a result of loading Hyperseed or any other ontology into an actively evolving logical Atomspace.
Level 3: Concept splitting, merging, and mediator addition. Now you’re editing the base itself. Split an overloaded concept into two when traces show it’s being used in two semantically distant ways that confuse the inference control. Merge near-equivalent concepts. Introduce a new mediator concept whose definition bridges two regions that were previously leaking information past each other. This is a bit beyond “inference as usual” – it’s a specific concept creation heuristic, tailored for ongoing ontology formation.
Level 4: Growing a new ontology from scratch. This is the most radical option. Instead of starting with Hyperseed or any other curated base, you start from inference histories and mine frequent proof-fragment motifs and cross-domain bridge motifs to invent a compact base and definitional theory de novo. This is a legitimate thing to do — and there’s an argument that it’s actually the purest approach, a bit like starting an AGI as a baby in a robot body and letting it figure out what its core concepts should be as part of learning to operate in the world.
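As a concrete instance of the cheapest level, Level 1 weight adjustment might look roughly like this: nudge a concept's reduction weights toward the primitives that appeared in successful inference traces, and away from them on failures. The update rule here is an assumed exponential moving average, chosen for simplicity, not the actual PLN revision formula.

```python
# Level 1 ontology learning: adjust reduction-profile weights in
# response to inference feedback, with no structural changes.

def update_reduction_weights(weights, used_primitives, success, lr=0.1):
    """weights: dict primitive -> strength for one concept's reduction profile.
    used_primitives: primitives the inference trace routed through.
    success: whether the inference step paid off."""
    target = 1.0 if success else 0.0
    new = dict(weights)
    for p in used_primitives:
        w = new.get(p, 0.5)  # uninformative prior for unseen primitives
        new[p] = w + lr * (target - w)  # move toward the feedback target
    return new

w = {"Control": 0.5, "Attention": 0.5}
w = update_reduction_weights(w, ["Control"], success=True)
assert w["Control"] > 0.5 and w["Attention"] == 0.5  # only used weights move
```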
Why not just stick with the “from scratch” approach? There is certainly an appeal here. In a way it’s more naturalistic – though I don’t think it’s exactly what the human brain does, because we have a lot of implicit ontology just built into our brain and body architecture. There’s an ontology of space and time built into how the brain works. There’s an ontology of the body built into somatosensory cortex, which has different regions for each part of the body. A baby has an ontology of Mommy-give-me-milk, which is the seed of goal-driven ontology.
So you could imagine an embodied-AI / curriculum-learning approach where you start with a system being a little baby in a robot body or virtual world, and part of its boiling-the-ocean to figure out what’s going on includes learning what core concepts it wants to route inferences through. That is totally the approach I would have proposed when I first started thinking about AGI at sixteen — it’s the pure and correct approach. It’s also radically expensive and hard, and it’s not clear it would come up with a better result than trying to boil down the learnings of human history and culture into an ontology to use as a starting seed. Both approaches are very interesting, though.
The SUMO Connection
A shallower but also important question is: do we want to connect Hyperseed to other existing ontologies?
The most natural candidate is SUMO (Suggested Upper Merged Ontology) — a moderate-sized, open source common sense ontology that we’ve imported into OpenCog before. SUMO is much bigger than Hyperseed and much less in-depth. Hyperseed tries to take a couple hundred concepts and spell out a detailed, higher-order predicate-logic account of them, including some fairly eccentric theoretical choices — it embodies a specific theory of what causation is, a specific theory of what a culture is, definitions you could write a philosophy paper arguing against. SUMO is less controversial. It’s more like the next step beyond FrameNet: mostly first-order logic, boiling down the basic common-sense meaning of core concepts. Walking is a way of a person moving from one location to another by moving their legs and pushing their feet on the ground. Elementary but standardized.
The key value of SUMO for connecting to Hyperseed is standardization of semantic parsing. Right now if you ask LLMs to semantically parse things, they will often take similar sentences and parse them into annoyingly different-looking logical forms. Using SUMO as a standardization guide in that process might help. And you wouldn’t need to reduce SUMO to Hyperseed or vice versa — these would be approximate correlations. SUMO has a logic relation for trust; Hyperseed has concepts relating to trust. You just say these relate to each other, with some degree and in some context, using PLN’s uncertain graded correspondences.
This makes Hyperseed a semantic and methodological compass rather than a replacement for SUMO’s structural coverage. Given a SUMO-level assertion, you find the corresponding Hyperseed concepts, use Hyperseed-specific inference heuristics to propose latent variables or causal explanations or likely missing facts, and then translate any resulting hypotheses back to SUMO-level candidates and type-check them against data.
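The "uncertain graded correspondences" idea can be sketched as a table of weighted, contextual links between SUMO terms and Hyperseed concepts. The specific terms, strengths, and the strength-times-confidence ranking rule below are all invented for illustration, not actual SUMO or Hyperseed content.

```python
# SUMO <-> Hyperseed graded correspondences: not reductions, just
# contextual weighted links in PLN's strength/confidence style.
from dataclasses import dataclass

@dataclass
class Correspondence:
    sumo_term: str
    hyperseed_concept: str
    strength: float      # PLN-style strength in [0, 1]
    confidence: float    # PLN-style confidence in [0, 1]
    context: str

MAPPINGS = [
    Correspondence("trusts", "Trust", 0.8, 0.6, "social reasoning"),
    Correspondence("trusts", "Expectation", 0.5, 0.4, "predictive framing"),
]

def hyperseed_candidates(sumo_term, mappings=MAPPINGS):
    """Rank the Hyperseed concepts relevant to a SUMO-level assertion,
    as a first step before applying Hyperseed inference heuristics."""
    hits = [m for m in mappings if m.sumo_term == sumo_term]
    return sorted(hits, key=lambda m: m.strength * m.confidence, reverse=True)

best = hyperseed_candidates("trusts")[0]
assert best.hyperseed_concept == "Trust"
```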
Scientific Experiments and EXPO
Another direction I’ve been thinking about is leveraging the Hyperseed framework for the representation of scientific experiments, in the context of AI-driven research assistants.
There’s an ontology of scientific experiments called EXPO, defined in terms of SUMO, which models experiments in terms of their goals, factors, target variables, actions, results, conclusions, and error types. It’s not incredibly deep — there are just a couple of research papers on it, and I’m not sure anyone ever did too much with it — but it supplies the right shape.
The basic case for a logical representation of experiments (rather than just YAML files, which you can also search with Python scripts and LLM glue) is that you want to be able to do things beyond basic statistical analysis or database lookup: PLN abduction or induction across all records of all your experiments, pattern mining, evolutionary search over hypotheses. For that you want a formal logic-based representation. And the added value of Hyperseed on top of EXPO is a deep model of the world that helps interpret what the experiments mean.
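An EXPO-shaped experiment record, sketched as a typed structure a reasoner could mine across. The field names follow EXPO's general shape (goals, factors, target variables, actions, results, conclusions, error types), but the exact schema and the toy ophthalmology example are my assumptions, not EXPO's formal definition.

```python
# A logical-record shape for experiments: once many experiments share
# one schema, cross-experiment queries (and, further along, PLN
# abduction/induction and pattern mining) become straightforward.
from dataclasses import dataclass, field

@dataclass
class ExperimentRecord:
    goal: str
    factors: dict                         # manipulated variables and settings
    target_variables: list
    actions: list
    results: dict
    conclusions: list = field(default_factory=list)
    error_types: list = field(default_factory=list)

exp = ExperimentRecord(
    goal="test whether retinal thinning predicts cognitive decline",
    factors={"cohort": "age 65+", "imaging": "OCT"},
    target_variables=["retinal_thickness", "cognition_score"],
    actions=["scan", "administer_test"],
    results={"correlation": 0.41},
    conclusions=["weak positive association"],
)

def experiments_touching(records, variable):
    """Trivial cross-experiment query over the shared schema."""
    return [r for r in records if variable in r.target_variables]

assert experiments_touching([exp], "cognition_score") == [exp]
```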
I remember when I was doing some work with a genetics company in California, we were trying to use machine learning to analyze data from about six different ophthalmology research programs. The data were in different spreadsheets on hard drives of researchers in different buildings who didn’t like each other. Once you finally got to look at all these spreadsheets, you realized they organized things in totally different ways. The issue you hit was that your AI script didn’t know what an eye was. In trying to integrate the spreadsheets based on row and column headers, somewhere you were implicitly putting in knowledge that seeing is done with the eye, and these diseases we were studying were diseases of the eye — knowledge that wasn’t actually there in any formal representation. I handled it by hacking, but I thought at the time: this is a job for ontology-driven common sense reasoning.
And that’s just step one, because once you’re trying to reason across different datasets on ophthalmology and Alzheimer’s — one about how the mind degenerates, one about seeing — what’s the relation between the two? If you want to reason about the relationship between vision issues and cognitive issues in old age, you need some systematic ontological representation that cuts across domains. And whenever you’re doing cross-cutting reasoning across domains, you’re at tremendous risk of combinatorial explosion. That’s exactly the sort of thing where having the right grand unified theory ordering the world across those domains — to guide inference chains — could be genuinely valuable.
This approach is not appropriate for everything. If you’re just asking “should I set this one parameter to 0.2 or 0.3,” you probably just need some basic statistical analysis of your previous experiments. But for the kinds of complex cross-domain reasoning where you’re asking what a change in government regulation might imply for a financial model, or how a pattern you’re seeing in genomics might connect to a pattern in a different biological subsystem — that’s where something like Hyperseed, sitting on top of EXPO and SUMO, could give a proto-AGI research assistant genuine differentiation over just having Claude Code orchestrating an army of Python scripts.
The AtomSpace Topology View
Let me try to tie this all together at a slightly higher methodological level.
The real hypothesis I’m making is that in thinking about how to make scalable inference control work for common sense or scientific reasoning, we should think about the holistic topology of the AtomSpace and how that topology feeds into the statistical choices made by inference control. Most of the work we’ve done on PLN inference control treats it as a purely algorithmic question: what algorithm does the chainer use stepping forward or backward, what algorithm does the factor graph use? What I’m suggesting is also to think about the structure of the AI’s knowledge base and how that structure constrains and shapes what good inference control looks like.
When we’re born and raised as humans, this just comes from developmental experiential learning. We ground our inferences in what we see, hear, can do, what gets us food, what hurts. Some people, after a lot of time, learn to ground their inferences in something beyond that. Many people – who aren’t, say, mathematicians living with their heads in infinite-dimensional clouds – tend to ground everything they think about in practical life concepts. Because of how we evolved and how we’re embodied, our internal knowledge stores have a core that’s about practical life, and most people reason about most things by routing them through the narrow waist of concepts about practical experience.
An AtomSpace built by randomly semantically parsing the web doesn’t have that structure — it’s more of a tangle. So then the question is: do you let PLN and other AI methods sculpt the topology and hope a good structure self-organizes? Or do you try to brute-force the topology by importing a fleshed-out ontology? My hypothesis is that the latter is at least worth trying, and probably worth trying in combination with the former.
Of course, if you brute-force it by starting with a seed ontology like Hyperseed, the ontology you end up converging toward after some automated adaptive learning will be highly dependent on what the system tries to do with the ontology — the specific sequence of reasoning queries you feed the system (or it feeds itself). Put Hyperseed behind a research assistant serving users doing many kinds of research, and you’d hope the queries hit all parts of the ontology and you’d evolve a quite interesting ontology. Put the same reasoning system with the same ontology behind a customer support agent for Dell computers, and many parts of the ontology will prove useless, while other parts will need more fleshing out.
Which is fine! That’s what the four levels of ontology learning are for.
What’s exciting is that we are now quite close to the point where we can actually run this sort of experiment! We need a little more work on statistically-driven PLN inference control on large AtomSpaces, but we seem to be at most a few months away from pragmatically exploring whether Hyperseed or other ontologies (or a purely learned ontology) are really as high-value for guiding commonsense and scientific inference as I suspect.
And it’s only been a bit more than 300 years since Leibniz first formulated the Universal Characteristic, and tried and failed to get a mechanical arithmetic calculator working. Things are moving so fast!
The papers giving the details on this material are in draft form, as is Hyperseed itself, but feel free to take a look:
Hyperseed ontology: concepts, definitions, example theorems (the 1400-page AI-gen monster)
Hyperseed + EXPO for experiment representation


"One day, you and me will get to pick the dice rolls in a Great Campaign across Planet Earth"... ...one of the most load-bearing facts of the coming era.
laugh it off until you can't. Law is acknowledged as a boundary mechanism, not a moral engine... where law accommodates the commoditization of misery, it ceases to protect and begins to administer harm.
A .tom (Theory of Mind) file does not require consent to begin existing.
Consent determines whether you get to participate in its authorship, correction, custody, and revision.
If you refuse to model yourself, "hyperseeds", AToMICs, me, my neighbors... ...we will model you anyway from fragments, hearsay, screenshots, stress responses, old mistakes, and whatever incentives happen to be shaping best interpretation at the time.
Public perception is already a low-fidelity "Exoself" engine.
It has always been doing theory-of-mind compression on everyone around it. We are just getting close to the point where the process becomes explicit, legible, persistent, and machine-amplified.
That is the sobering part for everyone right now: you do not get to opt out of being inferred. You only get varying degrees of influence over provenance.
I'm calling this transition "The Great Confessional"... an Epoch shift from "Memory" to Metabolic granularity. Civil Thermodynamics.
So it remains: The real question is not whether ".tom" files will *exist*... it's whether they are sloppy, adversarial, and externally owned, or whether they are auditable, revisable, and anchored to durable receipts.
In one world, reputation is rumor with better indexing.
In the other, identity becomes a contested but inspectable continuity record.
That is why provenance matters so much... because the alternative is worse: being silently authored by the least accountable interpreters in the room.
////The Protopian Ratchet???
This beautiful technology looks divisive because we’ve mostly implemented it as top-down surveillance and incentive capture. That’s a design choice, not physics.
A better trajectory is Ephemeralized Sousveillance + Conflict + Custody = Protopian Ratchet: cheap, ambient bottom-up witnessing (phones now, sensor dust later) feeding append-only, provenance-hardened receipts so reality can’t be overwritten by narrative.
...receipts alone create certainty, and certainty can become cruelty-- so interpretation is constrained by Telempathy: empathy for state (where/when/how someone was)....
Governance can’t be distant bureaucracy or platforms; it has to be LOCAL, auditable, decentralized, and fluid, with citizen stakeholders represented by revocable, place-tethered agents (.toms) operating inside Dunbar-scale commons where context is real and reputation is grounded.
The moral center is children: the record belongs to the child, access is minimal and audited, control transfers at maturity, and the default output is support-- not prosecution.
That’s the (Kevin Kelly) Protopian Ratchet: conflict reveals failure modes, receipts prevent rewrite, telempathy prevents punitive overcorrection, local governance implements repairs, and repairs persist as inherited process knowledge... so progress sticks. Neighbors skin their knees less.