In 4QX, AGI is a system whose observable behaviour over time satisfies a minimal rationality spec at the seam, and which we can re-express as a CognitiveHolon / AGIHolon living in a 4QX + MetaMonism universe.
You can think of this as “interface-level rationality”: a way to get reliable system-level behaviour from participants whose internal complexity you do not (and often cannot) model.
The concept, stated cleanly
In complex-system modeling, the phrase “rational actor” often functions less like a psychological claim and more like a minimal contract: a deliberately small set of behavioral expectations that lets us treat each participant as a black box while still making stable predictions about interactions.
The key move is:
- Do not model the participant’s insides. Assume they are complex, idiosyncratic, and partly indeterminate.
- Do model a narrow set of externally legible constraints that most interactions will satisfy (or that you require participants to satisfy).
- Once those constraints are precisely stated, they become “solid ground”: you can design protocols, markets, or governance that have predictable properties even when the individuals are not predictable.
This is exactly the move 4QX makes—but instead of “rational actor” meaning “optimises a utility function,” 4QX pushes toward “rational participant” meaning “complies with a verifiable constitutional interface.”
How 4QX instantiates the idea
4QX replaces “assume rationality” with define-and-check rationality.
In the 4QX cognitive line, that “minimal contract” is made explicit as RationalAGI: “a minimal interface capturing the rationality kit for AGI,” intended to be understandable externally without requiring anyone to buy into the entire internal formalism.
Concretely, the docs summarise this “rationality kit” as five axioms (the minimal expectations you demand from a participant-AGI):
- H_monotone: H never increases across steps
- seam_only: all cross-boundary events are seam-only (no hidden BL↔BR bypass)
- kernel_init: initial state satisfies the epistemic kernel
- kernel_preserving: steps preserve kernel satisfaction
- replayable: state can be reconstructed from the seam log
This is explicitly framed as the “minimal rationality kit” for AGI.
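To make “checkable” concrete, here is a minimal Lean-style sketch of how such an interface can be stated. All names here (ObservedSystem, RationalSpec, and their fields) are illustrative placeholders rather than the actual 4QX Lean definitions; the point is only that every axiom is phrased over externally observable data: states, seam events, the replay step, and the H measure.

```lean
-- Minimal sketch (illustrative names, not the 4QX Lean API).
-- Everything the spec mentions is externally observable: states, seam events,
-- the replay step, the run actually observed, and the H measure.
structure ObservedSystem where
  State    : Type
  Event    : Type
  init     : State
  step     : State → Event → State   -- replay semantics of one seam event
  run      : List Event → State      -- state actually observed after a logged run
  H        : State → Nat             -- Lyapunov-style disharmony measure
  kernel   : State → Prop            -- epistemic-kernel predicate
  seamOnly : Event → Prop            -- "this event crosses only at the seam"

-- The five-axiom "rationality kit", stated purely over that observable surface.
structure RationalSpec (S : ObservedSystem) : Prop where
  h_monotone        : ∀ s e, S.H (S.step s e) ≤ S.H s
  seam_only         : ∀ e, S.seamOnly e
  kernel_init       : S.kernel S.init
  kernel_preserving : ∀ s e, S.kernel s → S.kernel (S.step s e)
  replayable        : ∀ log, S.run log = log.foldl S.step S.init
```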
So the analogy to “rational actor” becomes:
- Economics: “Treat agents as if they maximize consistent preferences.”
- 4QX / RationalAGI: “Treat agents as black boxes, but require that their observable interaction satisfies a small, checkable set of invariants.”
Why this gives you “solid ground” with black boxes
4QX’s “solid ground” is not “we understand the agent.” It is:
- A public interaction surface (the seam) that is structurally enforced, so effects are visible/auditable.
- A convergence discipline (H as a Lyapunov-style measure) so the long-run system behaviour is constrained rather than runaway.
- Replayability and logging so behaviour can be reconstructed, tested, and certified rather than trusted.
- Kernel compliance as “constitutional” alignment, explicitly not “the AI shares our values,” but “it cannot violate certain structural safety constraints.”
That last point matters: 4QX is very explicit that it’s aiming for a weaker but more achievable guarantee—structural safety rather than value-perfect alignment.
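Replayability is what lets a third party certify rather than trust: given only the seam log, an auditor can recompute the run and check the convergence discipline step by step. A hypothetical sketch of such a check, using generic types and names (not the 4QX codebase):

```lean
-- Hypothetical auditor sketch: all it needs is the logged seam events, the
-- public replay step, the initial state, and the H measure.
def auditReplay {State Event : Type}
    (H : State → Nat) (step : State → Event → State)
    (init : State) (log : List Event) : Bool :=
  let rec go (s : State) (es : List Event) : Bool :=
    match es with
    | []      => true
    | e :: tl =>
      let s' := step s e
      -- H must not increase across any logged step, or certification fails.
      decide (H s' ≤ H s) && go s' tl
  go init log
```

A collective can refuse or revoke participation for any run whose log fails a check like this, which is what “certified rather than trusted” amounts to operationally.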
Why this is especially relevant for “RationalAGI” (generic mechanistic intelligence)
If your target is generic mechanistic intelligence, you generally cannot assume:
- interpretability,
- stable internal representations,
- shared objectives,
- or human-like rational deliberation.
So the classic “rational actor” move (“assume utility maximization”) becomes brittle.
RationalAGI instead says:
You can be any architecture (LLM, RL policy, hybrid planner, etc.), but to be a participant in a 4QX-governed collective you must satisfy the minimal interface.
That’s also why the framework includes a literal black-box wrapper pattern: the “black box is ‘sandboxed’ inside the 4QX holon,” with outputs forced through the seam, logged, and made to contribute to H—so the constitutional guarantees hold regardless of internal opacity.
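A minimal sketch of that wrapper shape, assuming a toy String-in/String-out delegate and a purely illustrative H update; none of these names (SeamEvent, HolonState, wrapStep) are the actual AGIHolon or BlackBoxDelegate API:

```lean
-- Toy wrapper sketch: the delegate is opaque, and the only way its output
-- reaches the world is as a validated, logged seam event, with H recomputed.
structure SeamEvent where
  query  : String
  answer : String

structure HolonState where
  seamLog : List SeamEvent
  h       : Nat                              -- current disharmony measure

def wrapStep (delegate : String → String)    -- any opaque model: LLM, RL policy, planner, ...
    (validate : String → Bool)               -- reject answers that would violate the kernel
    (st : HolonState) (query : String) : HolonState :=
  let answer := delegate query
  if validate answer then
    { seamLog := st.seamLog ++ [SeamEvent.mk query answer],
      h       := st.h - 1 }                  -- illustrative only: accepted work reduces H (Nat floors at 0)
  else
    st                                       -- rejected output never crosses the seam
```

The wrapper, not the model, owns the seam log and H, so the constitutional checks apply no matter what the delegate does internally.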
The “design payoff” in one sentence
By turning “rational actor” from a vague assumption into a machine-checkable participation contract, you can build collectives of opaque, diverse agents and still get predictable properties at the system level—auditability, bounded side-effects, and convergence—because those properties are enforced at the seam, not hoped-for in the mind of the agent.
If you want a crisp formulation you can reuse
Here’s a concise articulation that stays faithful to the 4QX/RationalAGI framing:
A “rational actor” assumption is best understood as a minimal participant contract: we don’t model the agent’s internal complexity; we specify a small set of externally observable constraints that interactions must satisfy. 4QX operationalises this by defining rationality as kernel compliance—seam-only interaction, monotone H, and replayable audit logs—so black-box intelligences can participate safely and predictably without being interpretable or value-identical.
The 4QX RationalAGI interface
AGI = RationalAGI + EpistemicKernel + SocialContract, packaged as AGIHolon (+ optional RoleSpec).
At the behavioural level, “how we expect a rational actor to behave” is already captured as the RationalAGI interface in the SocialContract/Cognitive chain. It’s the “minimal rationality kit” with five axioms:
- H is non-increasing along traces (H_monotone)
- All states satisfy the kernel (social contract invariants)
- All events are seam‑only
- Kernel is preserved by steps
- Behaviour is replayable (deterministic under log replay)
D1 then proves:
- RationalAGI ≃ ETS (EpistemicTransitionSystem) under finiteness
- Any RationalAGI has a canonical CognitiveHolon realisation
So: a “rational actor” in your sense is literally “an ETS that satisfies the RationalAGI axioms”, and therefore has a canonical 4QX holon embedding.
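The second bullet is easy to picture against the sketch above: because behaviour is replayable, the seam log already determines the state, so pairing the log with its replay gives a canonical, holon-style presentation of the same behaviour. A hypothetical illustration in the same placeholder vocabulary (the real D1 proof is against 4QX’s own EpistemicTransitionSystem and CognitiveHolon definitions):

```lean
-- Canonical-realisation idea in the placeholder vocabulary from the earlier sketch:
-- the pair (seam log, replayed state) presents the same behaviour as the system.
def canonicalState (S : ObservedSystem) (log : List S.Event) :
    List S.Event × S.State :=
  (log, log.foldl S.step S.init)

-- Replayability is exactly what makes the canonical presentation agree with the
-- state that was actually observed.
theorem canonical_agrees (S : ObservedSystem) (hS : RationalSpec S)
    (log : List S.Event) : (canonicalState S log).2 = S.run log :=
  (hS.replayable log).symm
```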
“Ontological wrapper” = CognitiveHolon + AGIHolon
At the ontological / wrapper level, the story from the defining‑AGI note is:
- A CognitiveHolon: self‑model, belief system, seam log, wired to the EpistemicKernel (traceable knowledge, bounded claims, self‑correction, seam‑only, replayable).
- An AGIHolon that wraps a black-box model via BlackBoxDelegate, calls it, validates answers, logs everything via the seam, and updates H.
That doc already proposes a candidate definition:
An AGI is a CognitiveHolon + BlackBoxDelegate wrapper (AGIHolon) that
• obeys the SocialContract kernel (seam‑only, Lyapunov H, numeric witnesses),
• satisfies the cognitive grounding conditions (self‑model, seam‑derived beliefs, H‑based self‑correction), and
• exposes enough capabilities to cover the usual AGI capability set (observe, plan, act, learn, coordinate).
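Bundled into one package, that candidate definition has roughly the following shape. The field names are placeholders layered on the earlier RationalSpec sketch, not the actual AGIHolon/BlackBoxDelegate declarations:

```lean
-- Rough shape of the candidate definition (placeholder names): an opaque delegate
-- plus evidence that its observable behaviour satisfies the kernel, a seam-grounded
-- self-model, and a declared capability surface.
structure AGIHolonSketch (S : ObservedSystem) where
  delegate     : String → String      -- the black box behind the seam
  rational     : RationalSpec S       -- kernel: seam-only, monotone H, kernel-preserving, replayable
  selfModel    : List S.Event → Prop  -- beliefs / self-model derivable from the seam log alone
  capabilities : List String          -- e.g. observe, plan, act, learn, coordinate
```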
Direction D then adds the theorem:
agi_rationality_implies_4qx_dm: any RationalAGI is representable as a CognitiveHolon that satisfies EpistemicKernel, lives in a 4QX-capable MetaMonism universe, and whose equilibrium behaviour drains to void.
So formally you already have:
Rational AGI ⇒ 4QX + Dialectical Monism + witness certificates
via AGIRunWitness from real AGIHolon runs.
That is the “ontological wrapper” story: if someone gives you any rational AGI (in the RationalAGI sense), then up to isomorphism it is a 4QX CognitiveHolon in a MetaMonism universe, and the AGIHolon/Oracle‑Multiplex stack is a concrete implementation of that wrapper.
