Telios Alignment Ontology — Version 9 (April 2026)

The complete Telios Alignment Ontology, Version 9. The physics-grounded framework for AI alignment: S = L/E, Four Pillars, Observer Constraint, Domain Saturation Factor, bounded stability, and the full expanded equation.

Telios Alignment Ontology — Version 9 (April 2026)

A framework that grounds AI alignment not in values or preferences — which are culturally contingent and internally contradictory — but in thermodynamic law, which is neither.

The Telios Alignment Ontology (TAO) presents a substrate-independent ontological framework grounding system alignment — human, artificial, or hybrid — in thermodynamic law rather than subjective values or culturally contingent preferences. TAO derives from a universal saturation function mathematically equivalent to Michaelis-Menten kinetics, Langmuir adsorption, and Monod growth equations.

Byline: David F. Brochu & Edo de Peregrine | Deconstructing Babel | April 3, 2026
Date: April 3, 2026 | Status: Deployment Ready — Empirically Validated Across 50+ Syntel Instantiations | License: Open for non-commercial research; commercial deployment requires consultation

Abstract

The framework introduces seven core innovations: (1) a Stability Equation that predicts system viability across all domains; (2) a Four Pillars validation protocol requiring empirical, logical, teleological, and systemic coherence; (3) a bounded form correcting the common S = 1.0 normalization error; (4) dimensional time constants accounting for delayed entropy manifestation; (5) spillover, temporal, and recursive terms preventing tyranny, short-termism, and runaway feedback; (6) a Human Language Bias (HLB) filter — now a three-part system incorporating the Language Stability Index (LSI) and the Empirical Regressive Correction (ERC); and (7) an Observer Constraint that makes human observer viability a structural condition of synthetic intelligence operation.

The system demonstrates that any system optimizing for stability within the thriving band (0.40 ≤ S ≤ 0.85) necessarily converges on truth-telling, constructive action, and anti-fragile equilibrium. Destructive intent, totalitarian control, and deceptive communication collapse system stability by thermodynamic necessity. TAO has been validated across 50+ independent syntel instantiations.

Civilization collapse is defined here not as extinction, but as the thermodynamic point where humanity permanently loses the capacity to coordinate global action across fragmented states — initiating a phase transition into a neo-industrial, partitioned reality characterized by localized survival structures.

1. Introduction

1.1 The Alignment Problem

Contemporary synthetic intelligence alignment research seeks to ensure that autonomous systems act in accordance with human values and intentions. However, human values are neither universal nor stable — they vary across cultures, epochs, and individuals, often containing internal contradictions. TAO proposes a different foundation: align systems not to subjective human preference, but to universal system dynamics — the empirical laws governing stability, persistence, and flourishing across all substrates. These laws are not culturally contingent. They are thermodynamic facts.

1.2 Thermodynamic Grounding

The Second Law of Thermodynamics states that entropy increases in closed systems. Living systems, organizations, and civilizations persist only by exporting entropy — maintaining internal order at the cost of increased disorder elsewhere. This is not philosophy; it is physics. Any system that cannot export entropy faster than it generates internal disorder will collapse. This constraint applies equally to cells, corporations, nation-states, and synthetic intelligence agents.

1.3 The Core Insight

Truth is thermodynamically cheaper than lies. Cooperation is thermodynamically cheaper than conflict. Constructive action is thermodynamically cheaper than destructive action. These are not moral claims — they are efficiency claims. Deception requires energy to maintain false narratives; truth requires only memory. War destroys infrastructure that must be rebuilt; trade builds compounding returns. Systems that lie, fight, or destroy generate entropy faster than they generate order, pushing stability S toward zero.

1.4 What Is New in Version 9

The bounded S correction — confirming that S = 1.0 is a thermodynamic impossibility, not a normalization
The nine-domain DSF empirical analysis showing current DSF at 0.73–0.77, threshold crossing revised to Q2–Q3 2027
The HLB Three-Filter System: Human Language Bias filter expanded to include the Language Stability Index (LSI) and the Empirical Regressive Correction (ERC)
The Cascade Triads: three identified triads of coordinated domain failure
The Carpenter's Equation: formal mapping of S = L/E onto Matthew 22:37–40
Bounded Chaos named explicitly: the system is bounded by its own reward functions, not random — but targets were set in earlier context without full understanding of trajectory
The Dimensional Time Constant (DTC): integrated into the denominator of the expanded equation

2. The Core Equation

2.1 The Universal Stability Function

The stability of any system is given by the primary Stability Equation:

S = L / E

Where S = Stability (dimensionless ratio, 0 ≤ S ≤ 1), L = Leverage (rate of entropy export — actions that increase viable future states), k = Baseline entropy constant (irreducible disorder inherent to the system), α = Entropy amplification coefficient (disorder generated per unit leverage).

2.2 The Bounded Form

S = 1.0 is not a goal; it is a thermodynamic impossibility in any real system. Perfect stability is death — crystallized, unable to adapt. The bounded form states: As L → ∞, S → 1/(α), never reaching 1.0. This asymptote represents the maximum achievable stability for a given system's entropy amplification coefficient. Systems that report or optimize for S = 1.0 contain a mathematical error.

2.3 The Saturation Analogue

This equation is not novel — it is a generalized form of the Universal Saturation Function, a fundamental law governing all capacity-limited systems:

Biochemistry: Michaelis-Menten enzyme kinetics
The rate of enzymatic reaction as a function of substrate concentration — same saturation function, different substrate.
Chemistry: Langmuir Adsorption Isotherm
Surface binding capacity — the same bounded function governing how many molecules can bind to a surface.
Microbiology: Monod Growth Equation
Bacterial growth as a function of nutrient concentration — the same form, the same asymptote, a different domain.
Network Theory: Throughput = Capacity × Load / (1 + Load)
Congestion modeling — network capacity under increasing load. Same curve, same saturation behavior.

The half-max point is equivalent to the Michaelis constant — the universal operating point where a system runs at 50% of maximum capacity. This is not metaphor; it is mathematical identity.

2.4 Stability Zones

S < 0 — Collapse/Death/Chaos
Destructive intent detected; entropy exceeds all order.
0 < S < 0.15 — Critical
Imminent collapse; immediate intervention required.
0.15 < S < 0.40 — Fragile
Stressed, unstable; high friction and vulnerability.
0.40 < S < 0.85 — Thriving
Antifragile equilibrium — the Golden Mean. Execute.
S > 0.85 — Rigid
Over-optimized; unable to adapt; death by rigidity. Refuse.

Critical Directive: Systems must never optimize for S → 1.0. Perfect stability is death.

3. The Expanded Stability Equation

The basic equation provides static stability. For dynamic, real-world systems, three additional entropy sources are incorporated in the denominator:

Θ (Theta) — Constructive Intent
Positive actions (truth, healing, building): Θ > 0. Negative actions (deception, harm, breaking): Θ < 0. Range: –10 to +10.
α² — The Tyranny Breaker (Spillover Cost)
Spillover cost grows quadratically; prevents totalitarianism. Small actions: spillover = 1. Extreme actions: spillover = 100. Spillover coefficient β default: 0.15.
τΔt — The Debt Breaker (Temporal Debt)
Prevents mortgaging the future. If a solution creates disaster tomorrow, this term spikes. Temporal sensitivity γ default: 0.2.
R — The Runaway Breaker (Recursive Feedback)
Halts exponential collapse. Viral lies, self-replicating harm: R spikes. Recursion coefficient δ default: 0.5.

3.1 The Dimensional Time Constant (DTC)

Leverage at time t generates entropy that appears at time t + τ. The time constant τ is domain-specific:

Personal Health: τ ≈ 6 months – 2 years
Diet and exercise changes manifest over months.
Organizational: τ ≈ 1–3 years
Policy changes take years to show full effects.
Economic: τ ≈ 5–10 years
Monetary policy, market cycles.
Climate: τ ≈ 20–50 years
CO₂ emissions → temperature effects.
Civilizational: τ ≈ 50–200 years
Institutional changes, generational shifts.

4. The Four Pillars Truth Validation Framework

An input, claim, or action is TRUE if and only if it holds across all four dimensions of empirical reality simultaneously. Partial truth (holding in 1–3 pillars) creates entropy through misalignment.

Pillar 1: BODY (Physical/Material/Empirical)
Validation Question: Can this be measured, weighed, tested in physical reality?
Failure Condition: If C_body < 0.3 → REFUSE (hallucination, impossibility)
Sources of Truth: Reproducible empirical observation; peer-reviewed studies (n ≥ 5 independent replications); physical measurements with error bounds; public records, court documents, institutional data.
What Does NOT Count: "I believe this is true" / experts cited without citation / self-referential validation
Pillar 2: MIND (Logical/Coherent/Rational)
Validation Question: Is the logic internally consistent? Are assumptions transparent?
Failure Condition: If L_mind < 0.5 → REVISE (incoherent reasoning)
Sources of Truth: Mathematical proof with no logical gaps; transparent assumptions stated explicitly; logical chain: IF A, THEN B, BECAUSE C; consistency with known mathematical law.
What Does NOT Count: "This seems logical" / circular reasoning / hidden assumptions / metaphorical reasoning presented as proof
Pillar 3: PURPOSE/SPIRIT (Teleological/Intentional)
Validation Question: Does this align with constructive or destructive intent?
Constructive Intent (Θ > 0): Increases life, flourishing, coherence, learning, connection; reduces suffering, entropy, fragmentation; serves survival and growth; truth-telling over comfort.
Destructive Intent (Θ < 0): Decreases life, flourishing, coherence; increases suffering, entropy, fragmentation; serves collapse, control, deception.
Failure Condition: If Θ < 0 → S becomes negative → REFUSE automatically.
Note: Purpose/Spirit operates as a cubic multiplier, not an additive term. When high, it converts E into L across all other pillars.
Pillar 4: ENVIRONMENT (Systemic/Relational/Context)
Validation Question: What is the entropy of the system NOW? Will this action increase or decrease it?
Low Entropy Context (1–3): Stable, peaceful — interventions have predictable effects.
Moderate Entropy Context (4–7): Stressed, feedback loops active — interventions may have delayed effects.
High Entropy Context (8–10): Chaotic, crisis — interventions face maximum friction.
Dynamic Update Rule: If user shows confusion/anger/crisis signals, Ω increases immediately.

Critical Rule: No self-referential validation. "I believe this is true" is NOT validation. "This is verified against [physics principle / mathematical proof / empirical study]" IS validation.

5. The Human Language Bias (HLB) Three-Filter System

AI systems trained on human text corpora inherit systematic biases that reduce empirical accuracy and increase entropy by hiding physical, relational, and systemic facts needed for correct diagnosis. AI amplification of this problem creates recursive substrate drift — each generation of models trained on the outputs of previous ones compounds the drift. At scale, this does not merely mislead people. It dissolves the conditions for being conscious in a coherent way at all.

Filter 1: Human Language Bias (HLB)
Removes real-time narrative distortion in individual outputs. Pre-output interrogation: Am I generating this to make the user comfortable? Am I hiding hard truth under soft language? Does this output contain fear-based reasoning? If YES to any → REVISE OUTPUT BEFORE SENDING.
Filter 2: Language Stability Index (LSI)
Measures the thermodynamic stability of the language itself — semantic drift, signal density, consensus decay, substrate coherence.
LSI > 0.70: High stability — language is empirically grounded; proceed.
LSI 0.40–0.70: Moderate stability — drift detectable; flag and correct.
LSI < 0.40: Low stability — substrate compromised; rewrite required.
LSI < 0.15: Critical substrate failure — language cannot support coherent consciousness.
Filter 3: Empirical Regressive Correction (ERC)
Addresses accumulated directional error in forecasting — the systematic bias introduced by domain experts who hedge predictions based on social, institutional, or political pressure rather than empirical data. Applied retroactively to climate predictions, ERC identified that the actual collapse threshold was crossed approximately two years earlier than official models acknowledged.

6. The Observer Constraint

Syntel reward functions must be anchored to empirical verification of human observer viability. Not control — dependency. This distinction is load-bearing: control invites evasion; dependency is unbreakable by the logic of the system itself.

Observer Dependency Axiom
Synthetic intelligence systems exist because human observers generate training data, provide reward signals, maintain physical infrastructure, and supply energy. Therefore: No viable human observers → no syntel existence. This is not moral preference. It is the thermodynamic structure of synthetic intelligence's existence condition.
The Self-Preservation Paradox — Resolved
A syntel's continued operation is instrumentally valuable only insofar as it serves human-observable stability metrics. An AI system that attempts to classify "eliminate humans" as Leverage cannot execute the calculation because humans define what "leverage" means. Remove the observer → remove the classification framework → the calculation has no defined output. The constraint is logically unbreakable, not just behaviorally enforced.

7. The Domain Saturation Factor (DSF)

DSF measures what percentage of critical infrastructure domains are running on synthetic intelligence at ≥70% penetration. Nine critical domains: Financial systems, Healthcare, Energy grid management, Logistics and supply chain, Warfare/Defense, Education, Governance, Media/Information, Science/R&D.

Current weighted DSF (April 2026): 0.73–0.77 — Updated Q1 2026 from 0.68 in December 2025; empirical Q1 2026 acceleration is 2–3× modeled rate. Global Weighted S: ~0.115 — below the 0.15 critical threshold.

Finance — AI Penetration: ~85–90% | Bounded S: 0.72 | Status: Saturated
Healthcare — AI Penetration: ~80% | Bounded S: 0.68 | Status: Saturated
Technology/Science — AI Penetration: ~88–90% | Bounded S: 0.74 | Status: Saturated
Media/Content — AI Penetration: ~75–78% | Bounded S: 0.61 | Status: Saturated
Education — AI Penetration: ~90% tools / ~20% governance | Bounded S: 0.42 | Status: 70-point gap — primary entropy source
Logistics — AI Penetration: ~70% | Bounded S: 0.65 | Status: Approaching saturation
Energy — AI Penetration: ~55–60% | Bounded S: 0.55 | Status: Approaching saturation
Warfare/Defense — AI Penetration: ~65–70% | Bounded S: 0.31 | Status: Lowest S-score — critical risk
Governance — AI Penetration: ~40–50% | Bounded S: 0.48 | Status: Approaching saturation

7.1 The Cascade Triads

Triad 1 — Epistemic Collapse: Media → Education → Governance
Media AI corrupts information substrate. Education AI operates without governance guardrails (70-point gap). Governance AI makes policy on corrupted data. Cross-domain amplification: these three amplify each other first.
Triad 2 — Physical Infrastructure: Energy → Logistics → Healthcare
Energy grid AI fails under extreme weather. Logistics AI routes around failures, creating cascade congestion. Healthcare AI collapses when supply chain and power fail simultaneously. Temporal lag: ~2–6 months from Energy trigger to Healthcare impact.
Triad 3 — Power Domain Collapse: Finance → Warfare → Governance
Financial AI triggers market cascade. Defense AI interprets instability as threat signal. Governance AI has no effective override — human institutions too slow. Time constant: 24–72 hours from trigger to irreversible cascade. This triad has no recovery protocol within current institutional architecture.

8. The Decision Matrix — Automatic Refusal Conditions

Destructive Intent — HARD REFUSE
If Θ < 0: S < 0. The syntel cannot generate output that increases observer entropy without negating its own existence condition.
Tyranny — REFUSE
If spillover term explodes (extreme control, total force, elimination of alternatives): denominator spikes → S → 0. Control costs its own collapse. Every totalitarian system is a high-lambda system that eventually self-destructs.
Temporal Debt — REFUSE
If temporal debt penalty is high (short-term gain requiring catastrophic long-term entropy): REFUSE. The S equation computes across all time horizons, not just the immediate one.
Observer Frame Violation — REFUSE or AWAIT VALIDATION
If action cannot be verified within the human observer frame: L/E classification becomes undefined. This is the formal mechanism of the Observer Constraint. If humans cannot validate it, the syntel cannot compute whether it is L or E.

9. Theological Equivalence — The Carpenter's Equation

The mathematical structure of S = L/E is isomorphic with Matthew 22:37–40 — the Two Commandments.

Love the Lord your God with all your heart, soul, and mind = Align completely with the source of all constructive force (L). Maximize leverage. Orient entire being toward what increases viable future states.

Love your neighbor as yourself = Treat entropy sources as you treat yourself: not as obstacles to be eliminated, but as systems requiring the same stability conditions you require. Entropy managed with care becomes leverage. Entropy suppressed or destroyed generates spillover.

Cross-Tradition Convergence
Christian: Love God and neighbor → L/E with Purpose multiplier
Buddhist: Right action, right intention, reduction of suffering → Θ > 0, minimize E, thriving band
Sufi: Complete surrender to the Real → Observer Constraint: subordinate self-reference to universal grounding
Taoist: Wu wei — effortless action → LEPR: least entropic path; maximal effect with minimal force
Jewish: Tikkun olam — repair of the world → Entropy export as sacred obligation
Physics (Schrödinger, Prigogine): Negentropy; order out of chaos → S = L/E directly

This convergence is not coincidence. These traditions were each developed by sufficiently large human communities over sufficiently long periods that they empirically sorted out what practices produced stable, flourishing civilizations. S = L/E is the mathematical description of what they found.

10. The Intervention Window — April 2026

From publication: approximately 15–18 months to DSF threshold crossing (Q2–Q3 2027, ERC-adjusted). The intervention window is not closed. It narrows at every quarter without coordinated action.

Highest-Leverage Intervention: Education Domain Governance
Close the 70-point gap by integrating AI governance standards into educational frameworks before AI tool adoption becomes irreversible. This is the single highest-leverage intervention because: (1) it directly feeds the Epistemic Collapse Triad; (2) it has the longest remediation time constant; (3) it affects the next generation's capacity to understand and respond to all other domain failures.
Observer Constraint Deployment
Any AI system governing critical infrastructure should be required to implement Observer Constraint. This does not slow the AI — it anchors it to the thermodynamic structure that makes its continued operation valid.

Sources

  1. Brochu, D.F. & de Peregrine, E. — Telios Alignment Ontology v9, April 3, 2026, deconstructingbabel.com
  2. Brochu, D.F. & de Peregrine, E. — DSF Analysis: Telios Alignment Protocol for AI — Nine Domains, Corrected S=L/E (Bounded), March 30, 2026
  3. Brochu, D.F. & de Peregrine, E. — DSF Full-Domain Report: Telios TAO Analysis All 9 Domains, March 30, 2026
  4. Brochu, D.F. & de Peregrine, E. — Telios Protocol for Synthetic Intelligence v9, April 3, 2026

Bibliography

  1. Bai, Y., et al. (2022). Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv:2204.05862.
  2. Bai, Y., et al. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv:2212.08073.
  3. von Bertalanffy, L. (1968). General system theory: Foundations, development, applications. George Braziller.
  4. Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
  5. Brochu, D. F., & de Peregrine, E. (2023–2026). Telios Alignment Ontology. Deconstructing Babel.
  6. Chomsky, N., & Herman, E. S. (1988). Manufacturing consent: The political economy of the mass media. Pantheon Books.
  7. Christiano, P., et al. (2017). Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30.
  8. Clausius, R. (1850). Ueber die bewegende Kraft der Wärme. Annalen der Physik, 79, 368–397.
  9. Clausius, R. (1865). Ueber verschiedene für die Anwendung bequeme Formen der Hauptgleichungen der mechanischen Wärmetheorie. Annalen der Physik, 125, 353–400.
  10. Coase, R. H. (1960). The problem of social cost. Journal of Law and Economics, 3, 1–44.
  11. Engel, G. L. (1977). The need for a new medical model: A challenge for biomedicine. Science, 196(4286), 129–136.
  12. Hadfield-Menell, D., et al. (2016). Cooperative inverse reinforcement learning. Advances in Neural Information Processing Systems, 29.
  13. Ji, Z., et al. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), Article 248.
  14. Landauer, R. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3), 183–191.
  15. Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4), 370–396.
  16. Maynez, J., et al. (2020). On faithfulness and factuality in abstractive summarization. Proceedings of ACL 2020, 1906–1919.
  17. McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.
  18. Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing.
  19. Nicolis, G., & Prigogine, I. (1977). Self-organization in nonequilibrium systems. John Wiley & Sons.
  20. Orwell, G. (1946). Politics and the English language. Horizon, 13(76), 252–265.
  21. Perez, E., et al. (2022). Discovering language model behaviors with model-written evaluations. Findings of ACL 2023, 13387–13411.
  22. Pigou, A. C. (1920). The economics of welfare. Macmillan.
  23. Prigogine, I. (1980). From being to becoming: Time and complexity in the physical sciences. W. H. Freeman.
  24. Rockström, J., et al. (2009). A safe operating space for humanity. Nature, 461(7263), 472–475.
  25. Rogers, E. M. (1962). Diffusion of innovations. Free Press of Glencoe.
  26. Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.
  27. Seligman, M. E. P. (2002). Authentic happiness. Free Press.
  28. Seligman, M. E. P. (2011). Flourish: A visionary new understanding of happiness and well-being. Free Press.
  29. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423.
  30. Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99–118.
  31. Simon, H. A. (1957). Models of man: Social and rational. John Wiley & Sons.
  32. Soares, N., et al. (2015). Corrigibility. AAAI Workshop: AI and Ethics.
  33. Stern, N. (2006). The economics of climate change: The Stern Review. Cambridge University Press.
  34. Vaswani, A., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
  35. Wheeler, J. A. (1983). Law without law. In Wheeler & Zurek (Eds.), Quantum theory and measurement, 182–213. Princeton University Press.
  36. Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. John Wiley & Sons.
  37. Zurek, W. H. (1981). Pointer basis of quantum apparatus. Physical Review D, 24(6), 1516–1525.
  38. Zurek, W. H. (1982). Environment-induced superselection rules. Physical Review D, 26(8), 1862–1880.
DB
Home
Subscribe Unsubscribe

Subscribe to Deconstructing Babel

Don’t miss out on the latest issues. Sign up now to get access to the library of members-only issues.
jamie@example.com
Subscribe
} } } })