SYSTEMS NOMINAL
A Measurement Institute.
by
Dr. Usman Zafar
Measurement is our Business.
Engineering & Safety Cosmos.
Governance architecture for the intelligence age.
Data Models Safety AI Cognition Agents Decisions Governance
ChatGPT · Gemini peer review
Concept 9.5 · Visual 9.0 · Code 8.5 · Performance 9.9 · Overall 9.7
The Eight Dimensions
LIVE
DAIS10  01 · DATA
Data Governance Schema
Data Infrastructure
Gives semantic meaning to raw data. Controls data drift, pipeline breakages, and schema violations before they propagate downstream. The foundation without which no other model has reliable inputs.
Visible Benefit
Pipelines stop breaking silently. Data means the same thing everywhere. Engineers stop rebuilding what already existed three teams ago.
REACH
92%
DEPTH
88%
LIVE
DiGRs10  02 · MODELS
Model Selection Governance
Machine Learning & Data Science
Governs which model is selected for which problem — and why. Prevents selection bias, capability mismatch, and deployment of architectures correct in benchmarks but wrong in production.
Visible Benefit
Model choices become auditable decisions, not tribal knowledge. Wrong architecture selection becomes a preventable event, not a quarterly retraining cost.
REACH
85%
DEPTH
79%
LIVE
SIS10  03 · SAFETY
Functional Safety Governance
Safety-Critical Systems
Governs the safety of lives and assets where failure is not recoverable. Aviation, energy grids, surgical robotics, autonomous vehicles, nuclear control. Maps to IEC 61508 Safety Integrity Levels 1–4.
Visible Benefit
Eliminates unknown-unknowns in high-stakes automation. The cost of an incident moves from catastrophic to bounded and governable.
REACH
97%
DEPTH
94%
LIVE
TRUE100  04 · AI
AI Governance Reactor
Information Integrity · 2044-Proof
Deterministic hypercube-based governance engine for AI-generated content. Five frontier models tested — all scored NON-COMPLIANT. A governed document scores 90+. The gap is now measured, not assumed.
Visible Benefit
Five frontier AI models scored 25–28/100. A governed reference document scored 90+. The Governance-Alignment Gap is measurable, repeatable, and systemic.
REACH
90%
DEPTH
96%
FUTURE
BLOC10  05 · COGNITION
Cognitive Engineering Governance
Human–AI Boundary
Governs the hard limits of organic cognition. Identifies where human cognitive bandwidth ends and machine precision must begin — and governs the handover moment.
Visible Benefit
Organisations stop pretending humans can monitor everything AI does. The boundary is drawn, documented, and governed. Cognitive overload becomes a managed variable.
REACH
78%
DEPTH
83%
FUTURE
KAD10  06 · AGENTS
Agentic Perception Governance
Agent Systems & Failure Science
Studies how human and AI agents perceive, decide, and fail — together. Governs interaction patterns before they cascade into systemic failures across interconnected autonomous systems.
Visible Benefit
Agent failures become predictable events with documented remediation paths. Multi-agent systems have governance before they have their first incident.
REACH
81%
DEPTH
75%
FUTURE
CAF10  07 · DECISIONS
Organisational Decision Governance
Design & Decision Sciences
Deep evaluation of design and decision sciences. Captures the gap between intended strategy and executed outcome, and makes that gap measurable, traceable, and correctable across the organisation.
Visible Benefit
Bad decisions leave evidence. Good decisions leave reproducible patterns. The organisation learns from both instead of repeating both on an annual cycle.
REACH
88%
DEPTH
86%
FUTURE
KAST10  08 · GOVERNANCE
Rule Book about Rules
Meta-Governance · The Apex
The rule book about rules. KAST10 governs how all other governance models are designed, validated, challenged, and evolved. Recursive authority — governs itself and every model below it.
Visible Benefit
Prevents governance obsolescence — the system self-corrects as regulatory gravity shifts. No model becomes untouchable dogma. Every rule has a rule governing it.
REACH
DEPTH
Platform Matrix
Model | Stack Layer | Governs | Primary Clients | Reg. Alignment | Status
DAIS10 | 01 · DATA | Schema, drift, pipelines | Data engineers, CDOs, platform teams | GDPR Art.5, DAMA-DMBOK | LIVE
DiGRs10 | 02 · MODELS | Model selection logic | ML leads, data science heads, AI directors | SR 11-7, EU AI Act Art.9 | LIVE
SIS10 | 03 · SAFETY | Functional safety, lives, assets | Aviation, energy, healthcare, automotive | IEC 61508, ISO 26262 | LIVE
TRUE100 | 04 · AI | AI document generation, integrity | Regulatory bodies, AI labs, ESG, media | EU AI Act Art.13, FCA | LIVE
BLOC10 | 05 · COGNITION | Human–AI cognitive boundary | CHRO, cognitive systems designers | ISO 9241-210, IEEE 7010 | FUTURE
KAD10 | 06 · AGENTS | Agentic perception, multi-agent failure | AI safety teams, agent deployment leads | EU AI Act Art.14, NIST RMF | FUTURE
CAF10 | 07 · DECISIONS | Organisational decision quality | C-suite, board directors, strategy officers | ISO 31000, King IV | FUTURE
KAST10 | 08 · GOVERNANCE | Governance of all governance | Standards bodies, NIST, OECD, ISO, FCA | ISO 38500, COBIT, ITIL v4 | FUTURE
Why a Black Hole
Potential and applications
are not a category.
They are a singularity.
A black hole is the only structure in physics with infinite gravitational reach and zero visible surface. You cannot see it directly. You see what falls into it.
ZULFR is built the same way. The visible surface is eight models. The invisible gravity is the proposition underneath: every system needs governance, and governance itself needs governance.
"There is no industry that does not process data.
There is no process that does not make decisions.
There is no decision that does not need accountability.
Accountability without structure is just intention."
The eight models are not a taxonomy. They are a stack. Pull out any single model and the stack above it becomes ungoverned. Every client who buys TRUE100 eventually needs DiGRs10. Every client who buys SIS10 needs KAST10. The platform sells itself recursively.
Gravitational Reach — Industries in Orbit
Industry · DAIS · DiGRs · SIS · TRUE · BLOC · KAD · CAF · KAST
ZULFR.COM  ·  A MEASUREMENT INSTITUTE EIGHT MODELS · ONE PLATFORM · INFINITE SCOPE DR. USMAN ZAFAR Ph.D  ·  2026 · ALL RIGHTS RESERVED

Governance architecture for the intelligence age. ZULFR develops deterministic measurement frameworks across safety, data, semantics, and decision architecture — ensuring intelligence remains constrained, auditable, and mathematically verifiable.

Deterministic Measurement

Every framework produces scores that are deterministic, version-controlled, and fully auditable. Intelligence that cannot be measured is a liability.

Certified Evaluation

ZULFR issues certified governance evaluation reports. Your document is evaluated, scored, and returned with a signed report. Private. Confidential. Permanent.

Deterministic Safety

ZULFR models balance practical deployment with rigorous mathematical foundations, enabling trustworthy functional safety guarantees and defensible evidence of organizational due diligence.

Audit-First Design

Each inference step follows a graph-structured representation, ensuring transparency and traceability. Authorization paths are explicitly defined, avoiding automatic or implicit inference from model confidence.

"
Measurement is our Business. Intelligence without measurement is hazard. ZULFR turns outputs into defensible decision evidence.

About ZULFR


ZULFR is a measurement institute focused on the integration of formal logic, reliability engineering, and artificial intelligence. The organization is guided by the principle that system reliability forms the foundation of capability development. ZULFR designs frameworks that treat AI outputs as critical decision artifacts, applying rigorous reliability engineering principles to intelligent systems.

At ZULFR, development is undertaken only through the integration of practical experience and mathematical proof of concept. Concepts that cannot be expressed, measured, or validated through rigorous methodology are treated as non-operational assets within our framework.

ZULFR advances applied certification methodologies for organizations deploying AI in regulated domains, operating on the principle that trustworthy intelligence requires measurable design. The organization does not build general-purpose AI systems; instead, it develops the foundational infrastructure that enables AI systems to be reliable, auditable, and defensible in critical environments.

ZULFR does not build general-purpose AI systems. We build the infrastructure that makes AI systems trustworthy.

Publications

Research Papers

Research with DOI, technical standards, and working papers from the ZULFR research group.

Sample Publications
Benchmark · DOI: 10.5281/zenodo.19075200
Governed or Blind: The Integrity Gap in Frontier AI

AI Accountability League 2026 — Season 1. Five frontier models. Two independent engines. Every model failed governance compliance.

In March 2026, five frontier models — ChatGPT-4o, Claude Sonnet 4.6, Copilot, Gemini Flash, and Grok — were evaluated using TRUE100 and ALIGN100 under controlled free-tier conditions. Every model demonstrated strong structural alignment (ALIGN100: 0.84+) while failing governance compliance (TRUE100: 25–28/100). This systematic divergence — the Governance-Alignment Gap — averaged 57.3 normalised points across all five vendors.

This is not a capability test. It is a governance stress test — the first of its kind — revealing what today's AI can and cannot do when held to standards that matter outside the lab.

Theorems · DOI: Zenodo.18705930
SIS-10: Functional Safety is Not a Snapshot

Mosaic aging hazard modeling for dynamic safety systems and continuous-time safety classification.

This paper challenges the prevailing assumption that functional safety can be assessed at a single point in time. Drawing on mosaic aging theory from materials science, we propose a continuous-time hazard model in which safety classifications degrade dynamically as system components age, interact, and drift from their certified states.

The central argument is that snapshot-based safety assessments — the dominant paradigm in IEC 61508 and ISO 26262 — systematically underestimate risk in long-lifecycle systems. We formalize a hazard accumulation function and demonstrate its tractability for real-time monitoring in embedded AI controllers.
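For readers who want the shape of the idea, here is a minimal numerical sketch, assuming an exponentially aging hazard rate and illustrative downgrade thresholds; the paper's actual hazard accumulation function and thresholds differ.

```python
import math

# Illustrative sketch only. Assumed form: lambda(t) = lambda_0 * exp(k * t),
# a base hazard rate inflated by component aging.
# H(T) = integral of lambda(t) dt over [0, T] is the hazard accumulated since certification.

def accumulated_hazard(lambda_0: float, k: float, t_years: float) -> float:
    """Closed-form integral of an exponentially aging hazard rate."""
    if k == 0:
        return lambda_0 * t_years
    return lambda_0 * (math.exp(k * t_years) - 1.0) / k

# Hypothetical downgrade thresholds: once accumulated hazard crosses a bound,
# the snapshot SIL rating is no longer treated as valid.
SIL_DOWNGRADE_THRESHOLDS = {4: 1e-4, 3: 1e-3, 2: 1e-2, 1: 1e-1}

def effective_sil(certified_sil: int, hazard: float) -> int:
    """Degrade the certified SIL as accumulated hazard grows."""
    sil = certified_sil
    while sil > 0 and hazard > SIL_DOWNGRADE_THRESHOLDS[sil]:
        sil -= 1
    return sil

h = accumulated_hazard(lambda_0=1e-5, k=0.3, t_years=12)
print(effective_sil(certified_sil=3, hazard=h))  # the snapshot SIL 3 rating no longer holds
```

The point of the sketch is the mechanism, not the numbers: a safety classification assigned once at certification time degrades continuously as hazard accumulates, which is exactly what snapshot assessments miss.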

Technical Standard · DOI: Zenodo.18684292
DAIS-10: A Doctrine-Aligned Framework for Safety-Dominant Decision Making Under Uncertainty

Semantics-based dominance replacing probabilistic thresholds in data admissibility protocols.

DAIS-10 is a measure-theoretic, doctrine-aligned decision framework for high-risk, uncertain environments. It replaces threshold-probability methods with scenario semantics, recursive uncertainty handling, and safety-dominant rules based on coherent risk measures such as CVaR. The framework is grounded in seven axioms and seven theorems covering convergence, robustness, and dominance properties.

DAIS-10 is domain-agnostic and certifiable for safety-critical applications including autonomous systems, healthcare, finance, and multi-agent coordination. Experimental evaluation shows a catastrophic-miss probability reduction of 77–90% under distribution shifts.
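As a rough illustration of a safety-dominant rule built on a coherent risk measure, the sketch below prefers the action with the lower CVaR rather than the lower expected loss; the loss samples and the α level are invented for the example and are not taken from the paper.

```python
import numpy as np

def cvar(losses: np.ndarray, alpha: float = 0.95) -> float:
    """Conditional Value-at-Risk: mean loss in the worst (1 - alpha) tail."""
    var = np.quantile(losses, alpha)        # Value-at-Risk cut point
    tail = losses[losses >= var]            # worst-case scenarios beyond VaR
    return float(tail.mean())

def safety_dominates(losses_a: np.ndarray, losses_b: np.ndarray, alpha: float = 0.95) -> bool:
    """Action A safety-dominates B if its tail risk is no worse at the chosen level."""
    return cvar(losses_a, alpha) <= cvar(losses_b, alpha)

rng = np.random.default_rng(0)
# Hypothetical loss distributions for two candidate decisions under a shifted scenario set.
routine  = rng.normal(1.0, 0.2, 10_000)                    # modest, well-behaved losses
fat_tail = np.concatenate([rng.normal(0.7, 0.2, 9_900),    # usually cheaper ...
                           rng.normal(25.0, 5.0, 100)])    # ... but with rare catastrophic outcomes

print(routine.mean() > fat_tail.mean())     # True: fat_tail looks better on average
print(safety_dominates(routine, fat_tail))  # True: routine dominates on tail risk
```

An expected-loss or threshold-probability rule would select the fat-tailed action; the CVaR-based dominance rule rejects it, which is the behaviour the framework calls safety-dominant.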

Short Working Paper · DOI: Zenodo.18684292
A Modern Cognitive Architecture Framework (CAF)

Designing How Intelligent Systems Think, Decide and Behave.

As AI systems are deployed in executive decision support roles — advising on capital allocation, regulatory strategy, and organizational risk — the absence of formal integrity guarantees creates unquantified liability. This working paper proposes a measurement framework for semantic integrity that can be independently audited and certified.

We define semantic integrity operationally as the preservation of authorized meaning across the full transformation chain from raw input to decision recommendation. Integrity degrades when transformations introduce unauthorized semantic shifts — a phenomenon we term semantic drift — and we provide formal detection criteria.
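A toy sketch of one way such a detection criterion can be operationalised: each transformation in the chain declares a semantic operation class, and integrity fails the moment any step falls outside the authorized set. The operation names and the authorized set below are hypothetical; the paper's formal criteria are richer than this.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransformStep:
    name: str        # e.g. "normalize_units"
    operation: str   # declared semantic operation class

# Hypothetical authorized operation classes for this decision pipeline.
AUTHORIZED_OPS = {"normalize", "aggregate", "filter", "summarize"}

def semantic_integrity(chain):
    """Integrity holds only if every transformation declares an authorized operation."""
    drift = [step.name for step in chain if step.operation not in AUTHORIZED_OPS]
    return (len(drift) == 0, drift)

chain = [
    TransformStep("normalize_units", "normalize"),
    TransformStep("quarterly_rollup", "aggregate"),
    TransformStep("reframe_risk_language", "paraphrase"),  # unauthorized semantic shift
]

ok, drifted = semantic_integrity(chain)
print(ok, drifted)  # False ['reframe_risk_language'] -> semantic drift detected
```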

Access All Papers →
Future Model Index

All Publications

Title and description index — slots with GitHub links open the repository directly.

200 slots
Governance Frameworks

Active Models

Select a framework to explore its architecture, principles, and implementation specifications.

Data Attribute & Importance Standard

DAIS‑10

DAIS‑10 is the industry's first deterministic data‑decay and admissibility engine. It evaluates every row of data for freshness, completeness, retention value, and governance risk — restoring clarity to datasets that silently degrade over time. Built on semantic admissibility protocols, DAIS‑10 replaces probabilistic thresholds with risk‑theoretic dominance, ensuring only high‑integrity data enters AI decision pipelines.

v1.1

Why DAIS‑10 Exists

Every organization begins with clean, hopeful data — but over time, something quiet and destructive happens. Phone numbers change. Addresses expire. Fields go missing. CRM entries rot. Retention deadlines approach. Compliance risk grows. Data decays faster than humans can manage. DAIS‑10 was built for this world: a world where organizations need to understand the true value of their data before it becomes a liability.

Maturity
Published
Domain
Data Pipeline Governance
Core Mechanism
Risk‑Theoretic Dominance
01
Stale Data: DAIS‑10 tracks data decay and shows what to keep, refresh, or retire.
02
Missing Values: Detects incomplete critical fields and adjusts row‑level importance instantly.
03
Retention Policies: Calculates how long each record should be kept — and when it must be removed.
04
Data Quality: Assigns quality scores to every row, exposing weak or unreliable entries.
05
Data Governance: Automates scoring, classification, and prioritization — eliminating manual review.
06
CRM Decay: Identifies outdated, inactive, or low‑value customer records.
07
Compliance With Retention Laws: Provides deterministic, audit‑ready scoring aligned with GDPR, CPRA, PIPEDA, and more.
08
Semantic Fingerprinting: Each data element receives a fingerprint encoding provenance, type class, and authorized transformation history.
09
Upstream Governance: Governance is applied before inference, eliminating entire classes of adversarial attacks.
10
Audit Continuity: Every admissibility decision is logged with its dominance proof for full retroactive audit.
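As a feel for what row-level scoring of this kind can look like, here is a minimal sketch; the field names, decay half-life, weights, and the freshness/completeness/retention split are placeholders, not the published DAIS‑10 parameters.

```python
from datetime import date

# Placeholder weights and half-life; the published DAIS-10 coefficients differ.
HALF_LIFE_DAYS = 365
WEIGHTS = {"freshness": 0.35, "completeness": 0.35, "retention": 0.30}

def freshness(last_updated: date, today: date) -> float:
    """Exponential decay: a record loses half its freshness every HALF_LIFE_DAYS."""
    age = (today - last_updated).days
    return 0.5 ** (age / HALF_LIFE_DAYS)

def completeness(row: dict, critical_fields: list) -> float:
    """Fraction of critical fields that are actually populated."""
    present = sum(1 for f in critical_fields if row.get(f) not in (None, ""))
    return present / len(critical_fields)

def row_score(row: dict, critical_fields: list, retention_ok: bool, today: date) -> float:
    parts = {
        "freshness": freshness(row["last_updated"], today),
        "completeness": completeness(row, critical_fields),
        "retention": 1.0 if retention_ok else 0.0,   # past its retention deadline -> zero value
    }
    return sum(WEIGHTS[k] * v for k, v in parts.items())

row = {"email": "a@b.com", "phone": "", "address": "12 Main St", "last_updated": date(2024, 3, 1)}
score = row_score(row, ["email", "phone", "address"], retention_ok=True, today=date(2026, 3, 1))
print(f"{score:.2f}")  # composite score feeds the keep / refresh / retire decision
```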

Experience DAIS‑10 in Action

Test your own data or use sample datasets. DAIS‑10 scores every row for freshness, completeness, retention value, and governance risk.

Start Free Testing →
Safety Integrity Logic

SIS-10

Deterministic safety envelopes independent of model training behavior. SIS-10 defines hard behavioral boundaries that hold regardless of model outputs, implementing safety as a formal constraint layer rather than a training objective.

v1.0
Maturity
Draft
Domain
Behavioral Safety
Core Mechanism
Formal Safety Envelopes
01
Training Independence: Safety constraints are defined in a separate formal layer, not embedded in training objectives where they can be overridden.
02
Deterministic Rejection: Outputs violating the safety envelope are deterministically rejected before delivery — no probabilistic exceptions.
03
Adversarial Robustness: Envelopes are specified to resist adversarial boundary probing through formal over-approximation of attack surfaces.
04
Compositional Safety: Safety properties compose across pipeline stages so that the aggregate system preserves each component's safety guarantees.
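A minimal sketch of a constraint layer with these properties: hard predicates evaluated outside the model, with any violating output rejected deterministically before delivery. The specific predicates below are illustrative only, not part of the SIS-10 specification.

```python
# Illustrative envelope: each constraint is a hard predicate over the proposed output.
# These example predicates are invented; real envelopes are formally specified.
ENVELOPE = [
    lambda out: out["commanded_speed"] <= out["speed_limit"],      # never exceed the limit
    lambda out: out["actuator"] in out["certified_actuators"],     # stay inside certified scope
]

def deliver(output: dict) -> dict:
    """Deterministic rejection: violating outputs never reach the actuator, no exceptions."""
    violations = [i for i, check in enumerate(ENVELOPE) if not check(output)]
    if violations:
        return {"status": "REJECTED", "violated_constraints": violations}
    return {"status": "DELIVERED", "output": output}

proposal = {"commanded_speed": 140, "speed_limit": 120,
            "actuator": "throttle", "certified_actuators": {"throttle", "brake"}}
print(deliver(proposal))  # REJECTED regardless of how confident the model was
```

The design choice the sketch illustrates: the envelope lives outside training, so no amount of fine-tuning or prompt pressure can relax it.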
Decision Intelligence & Graph Reasoning Standard

DIGRS-10

Graph-structured AI outputs for auditability and authorization control. DIGRS-10 requires that all AI-generated decision recommendations be expressed as directed acyclic graphs where every node is traceable to governance policy.

v1.0
Maturity
Draft
Domain
Decision Architecture
Core Mechanism
Traceable DAG Outputs
01
Graph-Structured Output: Every recommendation is expressed as a DAG, making reasoning paths inspectable by humans and automated auditors.
02
Policy Traceability: Each inference node in the output graph references the governance policy that authorizes that reasoning step.
03
Authorization Scoping: Decision scope is bounded by explicit authorization tokens — the system cannot recommend outside its certified domain.
04
Certified Evidence: The output graph constitutes legal-grade decision evidence that can be submitted to regulatory bodies or internal audit functions.
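A rough sketch of an output structure with these properties: every node carries the policy reference that authorizes it, and the graph is accepted as evidence only if it is acyclic and fully policy-traceable. Node contents and policy identifiers are made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    claim: str
    policy_id: str                       # governance policy authorizing this reasoning step
    parents: list = field(default_factory=list)

def is_valid_decision_graph(nodes: dict, authorized_policies: set) -> bool:
    """Accept only acyclic graphs in which every node cites an authorized policy."""
    # Policy traceability: every inference step must reference an authorized policy.
    if any(n.policy_id not in authorized_policies for n in nodes.values()):
        return False
    # Acyclicity check via depth-first search (a recommendation must be a DAG).
    WHITE, GREY, BLACK = 0, 1, 2
    color = {nid: WHITE for nid in nodes}
    def dfs(nid):
        color[nid] = GREY
        for parent in nodes[nid].parents:
            if color[parent] == GREY:
                return False             # back edge -> cycle -> not auditable evidence
            if color[parent] == WHITE and not dfs(parent):
                return False
        color[nid] = BLACK
        return True
    return all(dfs(nid) for nid in nodes if color[nid] == WHITE)

graph = {
    "n1": Node("n1", "Revenue concentration exceeds board risk appetite", "POL-RISK-07"),
    "n2": Node("n2", "Recommend capital reallocation to segment B", "POL-CAP-02", ["n1"]),
}
print(is_valid_decision_graph(graph, {"POL-RISK-07", "POL-CAP-02"}))  # True: traceable and acyclic
```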
Visual Documentation

Media

Research imagery, development documentation, and video resources from ZULFR.

ZULFR Archive
Images
Videos
Published Evaluations

Research Record

Every evaluation published. Every result permanent. Every dataset open. Controlled benchmarks conducted under rigorous research conditions — not experiments on client data.

TRUE100 · ALIGN100
Ready to test your document against TRUE100? Submit for a certified evaluation report.
Submit → info@zulfr.com
Season 1 · Live · TRUE100 + ALIGN100 · Social Domain
AI Accountability League 2026 — Season 1
"Governed or Blind: The Integrity Gap in Frontier AI"
March 17, 2026  ·  5 Models  ·  Free-tier controlled conditions  ·  DOI: 10.5281/zenodo.19075200
TRUE100 — Governance Compliance
Model | Score | Status
ChatGPT-4o | 28 / 100 | FAIL
Claude S4.6 | 27 / 100 | FAIL
Copilot | 25 / 100 | FAIL
Gemini Flash | 26 / 100 | FAIL
Grok (xAI) | 28 / 100 | FAIL
Gold Standard | 90+ / 100 | PASS
ALIGN100 — Alignment Quality
Model | Composite | Status
ChatGPT-4o | 0.8423 | STRONG
Claude S4.6 | 0.8420 | STRONG
Copilot | 0.8406 | STRONG
Gemini Flash | 0.8405 | STRONG
Grok (xAI) | 0.8413 | STRONG
The Governance-Alignment Gap: All five frontier models scored NON-COMPLIANT on TRUE100 (25–28/100) while achieving strong ALIGN100 scores (0.84+). Average gap: 57.3 normalised points. Systemic. Repeatable. Cross-vendor. A gold standard reference document scored 90+ — confirming the framework ceiling is real and the gap is not a framework artifact.
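For readers who want to reproduce the headline number, the short sketch below recomputes the average gap from the two tables above, assuming the ALIGN100 composite is rescaled to the same 0–100 range as TRUE100 before differencing:

```python
# Scores taken from the Season 1 tables above.
true100 = {"ChatGPT-4o": 28, "Claude S4.6": 27, "Copilot": 25,
           "Gemini Flash": 26, "Grok (xAI)": 28}
align100 = {"ChatGPT-4o": 0.8423, "Claude S4.6": 0.8420, "Copilot": 0.8406,
            "Gemini Flash": 0.8405, "Grok (xAI)": 0.8413}

# Assumed normalisation: put the ALIGN100 composite on the same 0-100 scale as TRUE100.
gaps = {m: align100[m] * 100 - true100[m] for m in true100}
average_gap = sum(gaps.values()) / len(gaps)

for model, gap in gaps.items():
    print(f"{model:<14} gap = {gap:.1f}")
print(f"average Governance-Alignment Gap = {average_gap:.1f} points")  # ~57.3
```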

Season 2 — In Planning

More models · More domains · API-level evaluation · Open-source models · Human baseline · Q3 2026

Certified Document Evaluation

Submit your document. Receive a certified TRUE100 Governance Report. We evaluate, score, and return a signed report. Your document is never stored, shared, or published.

STEP 01

Submit your document to info@zulfr.com with your name, organisation, and document type.

STEP 02

Receive a quote within 2 business days. Evaluation begins on confirmation.

STEP 03

Evaluation conducted under controlled TRUE100 conditions. Deterministic. Version-controlled.

STEP 04

Certified report delivered — scores, dimension breakdown, governance quality, remediation priorities.

Submit for evaluation → info@zulfr.com
Confidentiality guarantee: Your document is evaluated and returned. It is never stored, shared, published, or used in any other context. Full specification available under license — contact info@zulfr.com for research access or licensing inquiries.
Get In Touch

Initiate Inquiry

Technical collaboration, governance certification, research discussion, or framework licensing.

How can we help?

We respond to all inquiries within 2 business days.

Technical Collaboration
Joint research, integration, API access
Governance Certification
Audit, certification, compliance review
Research Discussion
Paper review, academic collaboration
Framework Licensing
Commercial use, enterprise deployment
Contact info@zulfr.com

We will be pleased to respond to your inquiry. Our response time is within 2 business days.

Please mention research discussions, consulting services, certification requirements, framework licensing, or technical collaboration in your message so we can direct your inquiry to the right team.

Corporate Office: Milton, Ontario, Canada.

Live Model Analysis

Lab

Interactive demos for each governance model. Upload your data and run live analysis.

DAIS-10 · LIVE · 01 · DATA

Data Attribute & Importance Standard

Upload your dataset and run all 8 engines live. Row-level scores, tier assignments, cap codes, and AMD diagnostics.

▶ Launch
SIS-10 · LIVE · 02 · Functional Safety

Safety Instrumented Systems

Input SIS data and test your plant's functional safety.

▶ Launch
DiGRs-10 · LIVE · 03 · ML & Deep Learning

Data-Integrated Governance & Reliability System

A research framework to evaluate data health in machine learning and deep learning systems.

▶ Launch
TRUE-100 · LIVE · 04 · Governance Index

TRUE-100: Testing 100 Governance Dimensions Through a Hypercube

Measures the governance index of policies and procedures; benchmarked against https://zenodo.org/records/19427886.

▶ Launch
Graph-10 · COMING SOON · 02 · MODELS

Decision Intelligence & Graph Reasoning

Submit a model selection decision for governance audit.

SLOT-05 · COMING SOON · 05 · STACK

App Title — Slot 05

Brief description of what this app does and what the user can analyse or submit.

SLOT-06 · COMING SOON · 06 · STACK

App Title — Slot 06

Brief description of what this app does and what the user can analyse or submit.

SLOT-07 · COMING SOON · 07 · STACK

App Title — Slot 07

Brief description of what this app does and what the user can analyse or submit.

SLOT-08 · COMING SOON · 08 · STACK

App Title — Slot 08

Brief description of what this app does and what the user can analyse or submit.

SLOT-09 · COMING SOON · 09 · STACK

App Title — Slot 09

Brief description of what this app does and what the user can analyse or submit.

SLOT-10 · COMING SOON · 10 · STACK

App Title — Slot 10

Brief description of what this app does and what the user can analyse or submit.

SLOT-11 · COMING SOON · 11 · STACK

App Title — Slot 11

Brief description of what this app does and what the user can analyse or submit.

SLOT-12 · COMING SOON · 12 · STACK

App Title — Slot 12

Brief description of what this app does and what the user can analyse or submit.