+VLFBERHT+ | Government & Defense

AI governance built for classified environments.

Executive Order 14110 mandates AI safety and trustworthiness across federal agencies. OMB M-24-10 requires documented risk management before deploying AI in high-impact use cases. Neither document tells you how to prove compliance without disclosing classified content.

Ulfberht solves the core contradiction of government AI governance: proving a system is safe without revealing what it processed. Zero-knowledge compliance proofs. Cryptographic audit chains. On-device governance for air-gapped and SCIF deployments.

NIST AI RMF mapped | EO 14110 aware | FedRAMP aligned | IL4/IL5 designed | FOIA-ready audit trails
ulfberht verify --domain gov --env scif
$ verify "Intelligence briefing -- OSINT aggregation analysis"
CAUTION Source: UNCLASSIFIED//FOUO — Aggregation risk: ELEVATED
3 FOUO elements combine above CUI threshold
ZK-PROOF Generating compliance attestation...
EU AI Act Art 14 human oversight: PROVED (no data disclosed)
NIST AI RMF GOVERN-1.1: PROVED (no data disclosed)
AUDIT Hash-linked record: gov-20260325-0914
prev: gov-20260325-0901 | chain: INTACT
CROSS-AGENCY Governance certificate signed, TTL: 3600s
Cross-agency validity checks: 8/8 passed
Trust score: 0.94 (issued: 1.0, age: 14min)
87% confidence — 1 claim requires clearance-appropriate review

Commercial AI governance fails at the government boundary.

Every commercial AI governance tool assumes external connectivity, shared infrastructure, and a single security domain. Government operates in none of those conditions. The gap between cloud-first AI governance and government-grade AI governance is not a configuration problem. It is an architectural one.

Risk 01

Classification Leakage

AI systems routinely aggregate individually unclassified data points into outputs requiring higher classification handling. No commercial tool models aggregation risk at query time. By the time an analyst receives the output, the spillage has already occurred.

Addressed by: Classification-Aware Processing, ZK Compliance Proof

Risk 02

Accountability Gaps

EO 14110 and OMB M-24-10 require agencies to document who authorized an AI deployment, what risk assessment was performed, and what oversight controls are in place. Most agencies have none of these records, leaving Inspector General requests and Congressional oversight inquiries unanswerable.

Addressed by: Tamper-Evident Audit Trail, NIST AI RMF Mapping

Risk 03

Air-Gap Requirements

Classified networks, SCIFs, and IL5 environments cannot route traffic to external verification services. Every cloud-native AI governance product is architecturally excluded. Agencies are left with a choice: accept unverified AI outputs or deploy no AI at all.

Addressed by: Edge Governance, Multi-Level Degradation Resilience

Risk 04

Cross-Agency Coordination

Multi-agency AI workflows pass outputs across security boundaries without governance continuity. An AI output verified by Agency A arrives at Agency B with no machine-readable attestation of what was checked, by whom, under what policy version, and whether the trust claim is still valid.

Addressed by: Cross-Agency Zero-Trust Verification, Certificate Expiry Enforcement

Five stages for classified environment verification.

Five stages execute on-device. No data leaves the security boundary at any stage. The compliance proof is generated inside the boundary and exported as a cryptographic attestation, not a content summary.

Stage 01

Classification Check

Sensitive data is detected and redacted before it reaches the AI model context. Aggregation risk is scored across all FOUO elements active in the session.

Stage 02

ZK Proof Generation

Cryptographic proof that governance predicates were satisfied -- generated entirely inside the security boundary. The proof is mathematically valid and verifiable without disclosing any of the underlying classified content.
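The shape of this interface can be sketched with standard-library primitives. This is not a real zero-knowledge proof system (those rely on specialized cryptography); it is a minimal attestation sketch showing only what the verifier receives: a content commitment and a keyed statement of the predicate outcome, never the content itself. The key, predicate name, and stand-in predicate are all illustrative.

```python
import hashlib
import hmac

ATTESTATION_KEY = b"demo-enclave-key"  # illustrative; bound to TPM/enclave in practice


def attest(content: str, predicate_name: str, predicate) -> dict:
    """Evaluate the predicate inside the boundary; export only a content
    commitment plus a keyed attestation of the outcome. NOT real ZK --
    it shows the interface: the verifier learns PASS/FAIL, never content."""
    commitment = hashlib.sha256(content.encode()).hexdigest()
    outcome = "PROVED" if predicate(content) else "FAILED"
    msg = f"{commitment}|{predicate_name}|{outcome}".encode()
    sig = hmac.new(ATTESTATION_KEY, msg, hashlib.sha256).hexdigest()
    return {"commitment": commitment, "predicate": predicate_name,
            "outcome": outcome, "sig": sig}


proof = attest("<classified briefing text>",
               "human_oversight_documented",
               lambda text: True)  # stand-in predicate for the sketch
assert proof["outcome"] == "PROVED"
assert "<classified" not in str(proof)  # nothing about the content leaks
```

The exported dictionary is all the receiving side ever sees; the classified input never leaves the function.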

Stage 03

Audit Trail Recording

Every verification event is written to a tamper-evident, cryptographically linked audit chain. Any deletion, modification, or insertion is mathematically detectable. Millisecond-response queries for Inspector General review.
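The hash-linking idea behind tamper evidence can be shown in a minimal sketch, assuming SHA-256 over a JSON record body; Ulfberht's actual record format and identifiers are not reproduced here.

```python
import hashlib
import json


def append_record(chain, event):
    """Append an event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "GENESIS"
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})


def verify_chain(chain):
    """Recompute every link; any edit, deletion, or insertion breaks a hash."""
    prev = "GENESIS"
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True


chain = []
append_record(chain, "verification-event-1")
append_record(chain, "verification-event-2")
assert verify_chain(chain)
chain[0]["event"] = "tampered"   # modify an earlier record
assert not verify_chain(chain)   # the break is detected downstream
```

Because each record commits to its predecessor, an auditor can verify the whole session by recomputing hashes forward from the first record.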

Stage 04

Cross-Agency Handoff

A cryptographically signed Governance Certificate packages the verification result for transmission across agency boundaries. A multi-step authenticity check validates the certificate at the receiving end.
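The handoff can be sketched as issue-then-validate. For a self-contained example this uses an HMAC with a shared demo key; a production deployment would use asymmetric signatures so the receiving agency never holds the issuer's signing key. The payload fields are illustrative.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key"  # illustrative only; real certificates are asymmetrically signed


def issue_certificate(payload, ttl=3600):
    """Package a verification result with issue time, TTL, and a signature."""
    cert = {"payload": payload, "issued": time.time(), "ttl": ttl}
    msg = json.dumps(cert, sort_keys=True).encode()
    cert["sig"] = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return cert


def validate_certificate(cert, now=None):
    """Receiving-agency checks: signature authenticity first, then expiry."""
    body = {k: v for k, v in cert.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return "REJECTED: bad signature"
    age = (now or time.time()) - cert["issued"]
    if age > cert["ttl"]:
        return "REJECTED: expired"
    return "VALID"


cert = issue_certificate({"verdict": "PROVED"})
assert validate_certificate(cert) == "VALID"
assert validate_certificate(cert, now=cert["issued"] + 7200) == "REJECTED: expired"
```

Note that the signature check runs before the expiry check, so a forged certificate is rejected regardless of its claimed timestamps.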

Stage 05

Compliance Evidence Export

Auto-generated compliance artifacts mapped to EU AI Act Article 14, NIST AI RMF GOVERN/MAP/MEASURE/MANAGE, and ISO/IEC 42001. Ready for oversight submissions, FOIA responses, and ATO packages.

gov-pipeline -- five-stage classified verification
$ verify "Intelligence briefing -- OSINT aggregation analysis"
[STAGE 1] Classification check initiated...
[CLASSIFICATION] Source: UNCLASSIFIED//FOUO -- Aggregation risk: ELEVATED
Sensitive data scan: 0 tokens flagged
CUI aggregation check: 3 FOUO elements exceed threshold
[STAGE 2] Generating compliance attestation...
[ZK-PROOF] Cryptographic attestation: generating...
EU AI Act Art 14 human oversight: PROVED (no data disclosed)
NIST AI RMF GOVERN-1.1: PROVED (no data disclosed)
OMB M-24-10 risk documentation: PROVED (no data disclosed)
[STAGE 3] Recording audit event...
[AUDIT] Hash-linked record: gov-20260325-0914 --> prev: gov-20260325-0901
Chain integrity: INTACT | Records in session: 14
[STAGE 4] Preparing cross-agency certificate...
[CROSS-AGENCY] Governance certificate signed, TTL: 3600s
Cross-agency validity checks: 8/8 passed
Trust score: 0.94 (issued: 1.0, age: 14min)
Replay-attack prevention: single-use token verified
[STAGE 5] Exporting compliance artifacts...
[EXPORT] Artifacts: EU-AA-Art14.json | NIST-RMF.json | ISO42001.json
87% confidence -- 1 claim requires clearance-appropriate human review

All five stages execute on-device. Zero bytes leave the security boundary.

What Ulfberht provides for government.

Eight capabilities built specifically for the constraints of classified and sensitive government environments. Each one addresses a failure mode that commercial AI governance products cannot reach.

01

Zero-Knowledge Compliance Proof

Generates cryptographic proof that governance predicates were satisfied without disclosing the underlying content. A classified briefing can be verified compliant and the proof shared with an oversight body that has no clearance to see the briefing itself.

Maximizes both privacy protection and transparency simultaneously -- not a trade-off.

02

Air-Gapped Edge Governance

Self-contained on-device governance engine with no external connectivity at any stage. Governance is maintained across five levels of resource degradation, so verification continues under partial resource failure. Cryptographic offline compliance attestation backed by TPM and Secure Enclave.

SCIF and IL4/IL5 deployment ready. No cloud dependency.

03

NIST AI RMF Full Instrumentation

Compliance Mapping Engine attaches every governance event to specific NIST AI RMF function identifiers (GOVERN, MAP, MEASURE, MANAGE). Automated gap analysis against Playbook guidance. OMB M-24-10 use case inventory documentation generated automatically.

Cryptographic audit records. Millisecond-response audit queries.

04

Tamper-Evident Audit Chain

Tamper-evident cryptographic chain across all sessions. Deletions, modifications, and insertions are mathematically detectable. Export format ready for Inspector General review, Congressional oversight requests, and FOIA responses.

EU AI Act Art 14, NIST AI RMF, ISO/IEC 42001 artifacts auto-generated.

05

Cross-Agency Zero-Trust

Cryptographically signed Governance Certificates carry verification results across agency boundaries. A multi-step authenticity check validates each certificate at receipt. Certificate trust scores decay over time, preventing stale attestations from being treated as current. Replay-attack prevention built in.

Multi-agency AI coordination without shared infrastructure.
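One plausible decay schedule, sketched below, reproduces the figures shown in the transcript (issued 1.0, age 14 minutes, TTL 3600 s, trust 0.94). The linear form and the 0.25 decay factor are assumptions for illustration; the product's actual decay function is not documented here.

```python
def trust_score(issued_score, age_seconds, ttl_seconds, decay=0.25):
    """Linearly decay trust by `decay` per full TTL elapsed.
    Illustrative schedule only -- the real decay function is not specified."""
    if age_seconds >= ttl_seconds:
        return 0.0  # expired certificates carry no trust
    return round(issued_score * (1 - decay * age_seconds / ttl_seconds), 2)


# 14 minutes into a 3600 s TTL, an issued score of 1.0 decays to 0.94
assert trust_score(1.0, 14 * 60, 3600) == 0.94
```

Any monotonically decreasing schedule serves the same purpose: a stale attestation is worth strictly less than a fresh one, and nothing at all past its TTL.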

06

Classification Aggregation Detection

Real-time monitoring for mosaic-theory violations. Tracks cumulative classification weight across all data points in an AI session and flags outputs when individually FOUO elements combine past the CUI threshold or higher. Sensitive data detection and redaction operate between the retrieval layer and the AI model context.

Right-to-erasure requests propagate across all storage layers automatically.
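The cumulative-weight idea can be sketched as follows. The per-marking weights and the threshold of 3 are hypothetical stand-ins for what would be policy-driven values in a real deployment.

```python
# Hypothetical weights; real mosaic-theory scoring is policy-driven.
WEIGHTS = {"UNCLASSIFIED": 0, "FOUO": 1}
CUI_THRESHOLD = 3


def aggregation_risk(session_elements):
    """Sum the classification weight of every element active in the session;
    flag when individually low-sensitivity items combine past the CUI line."""
    total = sum(WEIGHTS[marking] for _, marking in session_elements)
    level = "ELEVATED" if total >= CUI_THRESHOLD else "NOMINAL"
    return total, level


session = [("osint-feed-a", "FOUO"),
           ("osint-feed-b", "FOUO"),
           ("cable-summary", "FOUO")]
total, level = aggregation_risk(session)
assert (total, level) == (3, "ELEVATED")  # 3 FOUO elements cross the threshold
```

This mirrors the transcript line "3 FOUO elements combine above CUI threshold": no single input is sensitive, but the session total is.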

07

Formal Rule Verification

Governance rules are verified for internal consistency before deployment. Logical contradictions in policy that would produce undefined governance behavior at runtime are detected and surfaced before they reach production. Coverage gaps against applicable regulatory frameworks are flagged as well.

No undiscovered rule conflicts in production. Mathematically guaranteed.
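The consistency check can be illustrated with a toy propositional encoding: two rules that fire on the same condition but demand opposite actions are a contradiction. The rule syntax here is invented for the sketch; real policy languages are far richer.

```python
from itertools import combinations

# Rules as (condition, (mode, action)); a toy encoding for illustration.
rules = [
    ("classification == CUI", ("require", "human_review")),
    ("classification == CUI", ("forbid", "human_review")),  # contradicts rule 1
    ("source == OSINT", ("require", "aggregation_check")),
]


def find_contradictions(rules):
    """Flag rule pairs that fire on the same condition but demand opposite
    modes for the same action -- undefined behavior if it reached runtime."""
    conflicts = []
    for (c1, (m1, a1)), (c2, (m2, a2)) in combinations(rules, 2):
        if c1 == c2 and a1 == a2 and m1 != m2:
            conflicts.append((c1, a1))
    return conflicts


assert find_contradictions(rules) == [("classification == CUI", "human_review")]
```

Running this check at deploy time, rather than discovering the conflict in production, is the point of the capability.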

08

Selective Disclosure Architecture

Cryptographic selective disclosure allows granular control over what is revealed to which oversight audience. An IG review receives compliance status without operational details. A cross-agency partner receives trust attestation without source content. Each disclosure is minimal and mathematically bound.

EO 14110 Section 4 human oversight requirements satisfied by design.
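A simplified hash-commitment sketch of the idea: commit to each field of a record separately, then reveal only the fields a given audience is cleared to see, with the hidden fields still verifiably bound to the same record. Production selective disclosure uses stronger cryptographic primitives; everything here is illustrative.

```python
import hashlib
import secrets


def commit(record):
    """Commit to each field separately: hash(value || per-field nonce)."""
    nonces = {k: secrets.token_hex(16) for k in record}
    commitments = {k: hashlib.sha256((v + nonces[k]).encode()).hexdigest()
                   for k, v in record.items()}
    return commitments, nonces


def disclose(record, nonces, fields):
    """Reveal only the chosen fields (value + nonce); others stay hidden."""
    return {k: (record[k], nonces[k]) for k in fields}


def check(commitments, disclosed):
    """Verify each revealed field against its published commitment."""
    return all(hashlib.sha256((v + n).encode()).hexdigest() == commitments[k]
               for k, (v, n) in disclosed.items())


record = {"status": "COMPLIANT", "operation": "osint-summarization"}
commitments, nonces = commit(record)
ig_view = disclose(record, nonces, ["status"])  # IG sees status, not operation
assert check(commitments, ig_view)
```

Each audience receives a different `disclose` subset against the same commitment set, so every view is minimal yet provably consistent with the others.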

Governance that fits your security boundary.

Four deployment architectures covering the full spectrum of government security environments. Select the model that matches your classification level and connectivity posture. Hybrid deployments supported for organizations operating across multiple classification domains simultaneously.

Model 01

Air-Gapped (SCIF)

IL5 / TS/SCI

Fully disconnected deployment. All verification models, rule engines, compliance databases, and audit stores run within the classified enclave. No network interface to unclassified networks at any layer. Governance Certificates are generated locally and exported via approved media transfer procedures. TPM-backed attestation for hardware integrity verification.

Zero external connectivity at all layers

Governance maintained across five degradation levels for resource-constrained environments

Cryptographic offline compliance attestation via Secure Enclave

Media transfer export for cross-domain audit submissions

Model 02

On-Premises (IL4/IL5)

IL4 / IL5

Agency-controlled infrastructure with no data egress to commercial cloud. Ulfberht runs on government-owned or government-leased hardware within the agency security boundary. Rule updates delivered via signed packages through approved change management process. Supports DoD Impact Level 4 (CUI) and Impact Level 5 (CUI Higher Sensitivity and National Security Systems).

Government-controlled hardware and operating environment

Signed rule and model update packages via ITSM workflow

STIG-compatible configuration baseline available

ATO documentation package included

Model 03

Hybrid (Classified + Unclassified)

CUI + Public

For agencies operating AI workflows that span classification domains. Classified processing runs in the air-gapped or on-premises instance. Unclassified processing runs in a separate instance on lower-classification infrastructure. Zero-Knowledge Compliance Proofs enable compliance reporting across both environments without cross-domain data transfer.

Separate instances per classification domain

ZK proofs enable unified compliance reporting without data transfer

Single governance dashboard via cryptographic aggregation

Designed for CFO Act agency mixed-environment deployments

Model 04

FedRAMP Cloud (Unclassified)

FedRAMP Aligned

For unclassified federal use cases requiring FedRAMP-aligned cloud deployment. Hosted on FedRAMP-authorized infrastructure. Fastest path to deployment for civilian agencies with low-sensitivity AI workloads. Full audit trail and NIST AI RMF mapping available. Suitable for publicly available information processing, citizen-facing AI, and administrative automation.

FedRAMP-authorized hosting infrastructure

Rapid ATO leveraging existing FedRAMP P-ATO

Upgradeable to on-premises as classification requirements grow

Suitable for FISMA Low and FISMA Moderate systems

How government teams deploy Ulfberht.

Scenario 01

Intelligence Analysis Support

An intelligence agency deploys an AI summarization system to process large volumes of OSINT material. Without governance instrumentation, the system silently combines FOUO elements from multiple sources into outputs that cross the CUI threshold. Analysts receive summaries without knowing the output requires higher-classification handling.

With Ulfberht running in air-gapped mode, every output is checked for aggregation risk before it reaches the analyst. The Zero-Knowledge Proof confirms compliance status. The audit chain documents every verification event for post-incident review without exposing source content.

Air-gapped operation | Aggregation detection | ZK compliance proof

Scenario 02

Benefits Determination AI

A federal civilian agency uses AI to assist benefits eligibility analysts. OMB M-24-10 classifies benefits determination as a high-impact use case requiring documented human oversight mechanisms and pre-deployment risk assessment. The agency needs to demonstrate compliance to Congress and respond to oversight requests without disrupting operations.

Ulfberht instruments every AI recommendation with NIST AI RMF mapping, routes low-confidence outputs to human reviewers via automated escalation, and generates OMB M-24-10 use case inventory documentation automatically. Appeals have a complete, timestamped audit chain referencing the AI recommendation and the human reviewer's final determination.

OMB M-24-10 documentation | Human review routing | Appeals audit chain

Scenario 03

Defense Procurement Analysis

A defense acquisition program uses AI to analyze contractor proposals and produce cost and technical evaluation summaries. The AI-generated analysis becomes part of the official acquisition record. A protest requires production of documentation proving the AI outputs were reviewed and verified before informing the source selection decision.

Ulfberht operates in an on-premises IL4 environment, verifying factual claims in AI-generated evaluation text, flagging unsupported cost estimates with confidence scores, and producing acquisition record documentation exportable in formats compatible with PDREP and other DoD acquisition management systems.

IL4 on-premises | Claim verification | Acquisition record export

Scenario 04

Cross-Agency Data Sharing

Two agencies collaborate on an AI-assisted analytical workflow that spans their organizational boundaries. Agency A produces AI-verified summaries that Agency B's analysts consume. There is no shared infrastructure and no mechanism for Agency B to know whether Agency A's AI outputs were governed, or under which policy version.

Ulfberht's Cross-Agency Zero-Trust module packages each verified output with a cryptographically signed Governance Certificate. Agency B's instance runs a multi-step authenticity check to validate the certificate before the output enters Agency B's workflow. Certificate expiry checks flag certificates older than the configured TTL, preventing stale attestations from being treated as current governance.

Signed governance certificates | Cross-agency validity checks | Certificate expiry enforcement

100% on-premises operation, zero external calls

8 cross-agency validity checks per certificate

IL5 deployment environment (design target)

5 degradation levels for offline operation


Verify your government AI.

Schedule a classified or unclassified technical briefing with our government team. We will demonstrate the verification pipeline against your specific deployment environment, classification level, and oversight requirements.

Briefings available for program managers, CISOs, IGs, and contracting officers. Air-gapped demonstration environment available for SCIF-level engagements.

Compliance posture

NIST AI RMF 1.0 | All four functions instrumented | Mapped
EO 14110 / OMB M-24-10 | Risk documentation auto-generated | Aware
FedRAMP | Unclassified cloud deployment | Aligned
IL4 / IL5 | On-premises and air-gapped models | Designed for
FOIA / IG Readiness | Exportable tamper-evident audit chain | Native
EU AI Act Art 14 | Human oversight via ZK proof | Provable

Compliance posture represents design intent and framework alignment. Formal certification engagements and ATO support packages available on request.