We treat participants as opaque black boxes and avoid modeling their internals. Instead, we define a minimal, externally verifiable contract on the public seam: no hidden private coupling, monotone decrease of a global harmony potential H, replayable interaction logs, and numeric witnesses that bound extraction by H‑drop.
In 4QX, “rational” means “kernel‑compliant,” not “utility‑maximizing.” A RationalAGI is any mechanistic intelligence (even if internally unreliable) whose outward behavior is constrained and audited to satisfy this rationality kit. This is the P2P move applied to cognition: network‑wide coherence comes from protocol‑level verification, not trust in the peer’s intelligence.
RationalAGI is a modeling move: treat each participant as an opaque, high‑complexity system, and then replace “what’s inside” with a minimal, externally checkable behavioral contract. In econ/game theory this often gets called a “rational actor” assumption. In distributed systems it’s closer to “a protocol‑compliant node model” (honest/Byzantine/faulty, etc.).
4QX takes that move and makes it the center of the architecture: the system’s guarantees are not built on knowing what a participant is, but on verifying what a participant does at the interface.
Below is a clean articulation of the concept in 4QX terms, and then the specialization to RationalAGI (generic mechanistic intelligence whose internal cognition is unreliable).
1) The core idea: “rational actor” as an interface contract, not an inner model
A “rational actor” assumption is best understood as an interface type:
- The participant is a black box (complex, indeterminate, possibly strategic).
- We do not model its internals.
- Instead we assert a minimal set of observable constraints on interaction.
- Those constraints are strong enough that, when many such black boxes interact, the system-level behavior becomes provable.
In 4QX language: we don’t need to know the BL/BR interiors of a holon; we need seam discipline and H‑descent at the seam.
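As a rough sketch of this "interface type" view (hypothetical Python names, not the project's Lean definitions), the contract can be stated entirely over seam observables; nothing in it mentions the participant's internals:

```python
from typing import Protocol, Sequence

class SeamEvent(Protocol):
    """One observable step at the public seam (illustrative fields only)."""
    h_before: int  # harmony potential H before the step
    h_after: int   # harmony potential H after the step
    claimed: int   # value the participant claims to have extracted

class SeamParticipant(Protocol):
    """The 'rational actor' as an interface: all we ever see is its seam log."""
    def seam_log(self) -> Sequence[SeamEvent]: ...

def kernel_compliant(p: SeamParticipant) -> bool:
    """Externally checkable contract: monotone H, extraction bounded by the H-drop."""
    return all(
        e.h_after <= e.h_before and e.claimed <= e.h_before - e.h_after
        for e in p.seam_log()
    )
```

Nothing in kernel_compliant inspects the participant's internals; provability at the system level comes from every participant satisfying this checkable boundary.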
2) 4QX’s “rational actor” is the Social Contract Kernel
4QX’s contract is not a psychological claim (“agents maximize utility”). It is a constitutional constraint on how black boxes may couple.
2.1 The non-negotiable clause: no BL↔BR shortcut
The contract’s foundation is structural: there is no direct edge between private interiors (BL and BR). This is described explicitly as “the missing edge as contract.”
Operationally, this is what lets you say:
- I can treat your private cognition/resources as opaque.
- You can treat mine as opaque.
- Yet our collective behavior is still auditable and composable, because all effects must cross the public seam.
This is why 4QX is naturally analogous to P2P networking: it’s a protocol boundary that prevents hidden coupling.
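A minimal sketch of that protocol boundary (illustrative names only): the private interiors live behind the object boundary, and the only thing another holon can call is the seam method, which logs everything that crosses it.

```python
import json
from typing import Any

class Holon:
    """Toy holon: no method exposes the private interiors to a peer."""

    def __init__(self) -> None:
        self._bl: dict[str, Any] = {}      # private interior (opaque to peers)
        self._br: dict[str, Any] = {}      # private interior (opaque to peers)
        self.seam_log: list[str] = []      # public, replayable record

    def exchange(self, offer: dict[str, Any]) -> dict[str, Any]:
        """The public seam: every inter-holon effect passes through here and is logged."""
        self.seam_log.append(json.dumps({"offer": offer}, sort_keys=True))
        response = {"ack": True}           # whatever the interior computes, only this escapes
        self.seam_log.append(json.dumps({"response": response}, sort_keys=True))
        return response
```

The structural point is that the class simply has no BL↔BR channel to misuse: any coupling between holons must appear in seam_log.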
2.2 What the kernel promises (and what it doesn’t)
The Social Contract is constitutional, not welfare-optimising. 4QX is explicit that it does not promise maximum utility, Pareto efficiency, etc.; it promises seam‑level safety, liveness, replay determinism, local sovereignty, and verifiable accountability.
This matches the “rational actor” abstraction: you don’t assume benevolence; you assume compliance with a constitution.
2.3 The concrete guarantees bundled in the kernel
In the formal alignment reframing, "alignment" is defined as kernel compliance, and the kernel is spelled out as a package of guarantees: seam-only interaction, H as a Lyapunov function, finite reabsorption, and numeric witness accountability.
That same module is very clear about the reframing: not “does what we want,” but “respects constitutional constraints,” and this is “structural safety, not value alignment.”
So: 4QX’s “rational actor” = any holon that satisfies the SocialContractKernel.
3) The P2P analogy, made precise: “Byzantine networking for minds”
Your framing is: “P2P networking aims for reliable network-wide coherence with unreliable/malicious peers; now apply that to peers that are rational intelligences, whose intelligence is itself unreliable.”
That is exactly the niche 4QX is carving out.
3.1 What replaces “trust the peer”?
4QX replaces “trust the peer” with “verify the seam.”
There is an explicit verification architecture: participants and external auditors can verify seam log integrity, H decrease per step, idempotence at equilibrium, replay from logs, and structural absence of BL↔BR paths.
In Lean tooling form, the practical alignment chain implements this idea as log-based verification and adversarial testing:
- PracticalAlignment is explicitly a chain: Alignment API → Seam log verifier → Adversarial test harness.
- verifyLog checks seam-only compliance, monotone H in the log, witness correctness, audit completeness, etc.
- verifyAlignmentFromLog turns a seam log into a checklist/report (i.e., external verification from observables).
- The adversarial harness literally generates attack scripts like "BL→BR bypass," "H increase," "witness violation," etc.
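A toy Python analogue of that chain (invented names, not the project's Lean functions): verification consumes only the logged trace and produces a checklist, and an "attack" log that violates one clause gets flagged.

```python
from dataclasses import dataclass

@dataclass
class LoggedStep:
    via_seam: bool  # did the effect cross the public seam?
    h_before: int
    h_after: int
    claimed: int    # value claimed/extracted in this step

def verify_log(log: list[LoggedStep]) -> dict[str, bool]:
    """Build an alignment checklist from observables alone (toy sketch)."""
    return {
        "seam_only": all(s.via_seam for s in log),
        "monotone_h": all(s.h_after <= s.h_before for s in log),
        "witness_bound": all(s.claimed <= s.h_before - s.h_after for s in log),
    }

# Adversarial-style check: a step that raises H must be caught by the verifier.
attack = [LoggedStep(via_seam=True, h_before=3, h_after=5, claimed=0)]
assert verify_log(attack)["monotone_h"] is False
```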
That is the distributed-systems pattern: don’t assume honest computation; assume verifiable protocol traces.
3.2 Why this still works when “intelligence is unreliable”
Because the contract is defined at the interface, not inside the cognition.
4QX is explicit about the seam separation: there is a clean separation between “private processing” and “public coordination,” with all inter-holon effects forced through the seam, which is what enables auditing and emergent coordination without internal transparency.
So even if an intelligence is noisy or hallucinatory internally, the system-level guarantees rest only on:
- seam discipline,
- replayability,
- witness bounds,
- and monotone H.
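For the replayability item in the list above, a toy sketch (hypothetical, not the project's code): the seam log is serialised canonically, and replaying it through a deterministic step function must reproduce the same digest.

```python
import hashlib
import json

def log_digest(log: list[dict]) -> str:
    """Canonical digest of a seam log; any faithful replay must reproduce it exactly."""
    return hashlib.sha256(json.dumps(log, sort_keys=True).encode()).hexdigest()

def replay(log: list[dict], apply_step) -> list[dict]:
    """Re-run every logged step through a deterministic step function."""
    return [apply_step(step) for step in log]

# Replay determinism for a trivial deterministic step function (copy each step).
log = [{"h_before": 4, "h_after": 3, "claimed": 1}]
assert log_digest(replay(log, dict)) == log_digest(log)
```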
4) RationalAGI: the “rational actor” class for generic mechanistic intelligence
Now your specialization: 4QX, but with peers that are rational general intelligences aligned to the 4QX kernel.
The formal project already treats this as a scoped interface, not a metaphysical claim.
4.1 The minimal “rationality kit” is explicit
RationalAGI is defined by a small set of externally meaningful requirements (the “rationality kit”), including monotone H and seam-only constraints, plus epistemic kernel satisfaction and replayability.
A parallel (more operational) summary of the same idea appears in the RationalAGI module header: it is a clearly scoped interface for finite/discrete, seam-logged, H-monotone architectures.
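Read operationally, the scoped interface is small. A hedged Python sketch of what it quantifies over (invented signatures; the actual interface is a Lean structure):

```python
from typing import Protocol, Sequence

class RationalAGI(Protocol):
    """Scoped interface: finite/discrete state, seam-logged steps, monotone H.

    Illustrative only; the Lean version packages these as propositions, not methods.
    """

    def h(self) -> int:
        """Current harmony potential (a natural number, so descent is well-founded)."""
        ...

    def step(self, seam_input: dict) -> dict:
        """Consume a seam message, emit a seam message; no other I/O channel exists."""
        ...

    def seam_log(self) -> Sequence[dict]:
        """Replayable record of every step taken so far."""
        ...
```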
4.2 The key move for “unreliable intelligence”: wrap it as a black box, enforce the contract
4QX explicitly provides an AGI wrapper that treats the model as opaque but sandboxes it inside a holon:
- “The black box is opaque… the wrapper constrains its outputs to be seam‑compliant… all interactions are logged and contribute to H tracking.”
- “The black box can return anything, but the wrapper will sanitise and validate before using it.”
- The design insight is stated directly: all outputs pass through the seam, are logged, and contribute to H, preserving constitutional guarantees regardless of internal behavior.
This is precisely your “peer intelligence is unreliable” point: you don’t prove the model is reliable; you prove the wrapper+protocol is.
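A hedged sketch of that wrapper pattern in Python (invented names; the project's wrapper is specified in Lean): the inner model is an arbitrary callable, and only sanitised, logged, H-accounted output ever reaches the seam.

```python
from typing import Callable

class WrappedAGI:
    """Sandbox an opaque model: the wrapper, not the model, carries the guarantees."""

    def __init__(self, black_box: Callable[[str], str]) -> None:
        self._black_box = black_box      # opaque; may return anything
        self.h = 10                      # toy harmony potential
        self.seam_log: list[dict] = []

    def _sanitise(self, raw: str) -> str:
        """Validate and normalise raw output before it may cross the seam."""
        return raw.strip()[:256] or "noop"

    def act(self, seam_input: str) -> str:
        """Every interaction crosses the seam, is logged, and contributes to H tracking."""
        out = self._sanitise(self._black_box(seam_input))
        h_after = max(0, self.h - 1)     # wrapper enforces non-increasing H
        self.seam_log.append({"in": seam_input, "out": out,
                              "h_before": self.h, "h_after": h_after})
        self.h = h_after
        return out

# Even a wildly unreliable black box yields a seam-compliant, auditable trace.
flaky = WrappedAGI(lambda _: "   some possibly-hallucinated reply   " * 50)
flaky.act("hello")
assert all(e["h_after"] <= e["h_before"] for e in flaky.seam_log)
```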
4.3 “Rational” here is epistemic and protocol-theoretic, not utility-maximizing
The epistemic kernel is explicitly interpreted in terms that map well to “unreliable intelligence”:
- traceable knowledge (grounded in seam logs),
- bounded claims (witness bounds),
- self-correction (H descent),
- seam-only interaction,
- replayability.
A concrete example of “bounded claims” is even phrased as a theorem: numeric witnesses ensure you can’t claim/extract more value than the H drop admits.
This is how you get “mechanistic rationality” without assuming perfect cognition.
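As a worked toy instance of that bound (numbers invented for illustration): if a logged step drops H from 7 to 4, the numeric witness admits a claim of at most 3, and anything larger fails the check.

```python
h_before, h_after, claimed = 7, 4, 5     # this step claims more than the H-drop allows
witness_ok = claimed <= h_before - h_after
assert witness_ok is False               # the extraction claim is rejected
```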
5) Your last point: why you don’t need “4QX for all HF” to get what you want
You said: we don’t need to prove 4QX for all HF; it only needs to be provable within the context of a collective of RationalAGI holons in a “behaviour-pattern universe.”
That is aligned with the formal scoping:
- HF is explicitly “hereditarily finite sets” in the foundation.
- 4QX is explicit that it is not claiming full ZF, only the HF fragment, and not metaphysical necessity.
- The RationalAGI story is explicitly a scoped theorem for finite/discrete, epistemically grounded systems, and it says so plainly.
Even within the social contract chain, you can see the same “scope discipline” in status notes: finite reabsorption is “GREEN for toy universes, AMBER for general HF” in the relevant module header.
So your stance is coherent with the project’s epistemic hygiene: prove what you need for the class of participants you actually intend to admit (RationalAGI), and treat everything else as “arbitrary environment/adversary.”
5.1 What you do get (formally) in that scoped universe
The formal theorem that matches your intention is literally:
any AGI satisfying the RationalAGI interface has a canonical CognitiveHolon realisation and lives in a 4QX-capable universe with equilibrium draining to void.
This is presented as agi_rationality_implies_4qx_dm.
That is exactly “we only need provability in the universe of RationalAGI holons.”
