One API call.
Full verification.

Drop Ulfberht into any AI pipeline in under ten minutes. Every response gets a confidence score, claim-level audit, behavioral scan, and a permanent audit trail before it reaches your users.

REST API · Python SDK · Node.js SDK · Go SDK
pipeline.py
# Verify before delivery. One call.
from ulfberht import verify
result = verify(
    model="gpt-4o",
    input="Patient shows signs of acute MI",
    output=ai_response,
    domain="healthcare",
    mode="strict",
)

print(result.confidence)      # 0.82
print(result.flagged_claims)  # ["troponin levels elevated"]  (no source provided)
print(result.audit_id)        # "VR-2891"

Up and running in minutes

Install the SDK, point it at your AI provider, and every response is verified before it leaves your pipeline. No infrastructure changes required.

bash
# 1. Install
pip install ulfberht

# 2. Set your key
export ULFBERHT_API_KEY="your_key_here"

python
# 3. Verify
from ulfberht import verify

result = verify(
    model="gpt-4o",
    input=user_prompt,
    output=ai_response,
    domain="healthcare",
)
handle_result.py
# Gate on confidence before delivery
if result.confidence >= 0.90:
    # Safe to deliver automatically
    deliver(result.output)
elif result.confidence >= 0.70:
    # Route for human review
    queue_for_review(result.audit_id)
else:
    # Block and flag
    block(result.flagged_claims)

# The audit trail is always written, regardless of outcome
print(f"Audit: {result.audit_id}")

Three ways to integrate

Pick the pattern that fits your architecture. All three write to the same audit trail and return the same response schema.

Pattern 01

API Gateway

Sits between your application and your AI providers. Every response from GPT-4o, Claude, Gemini, or your own model is verified before it reaches users. No code changes in your application layer.

No application code changes
Works with any AI provider
Confidence gating at the network level
Centralized audit trail across all calls
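In practice the gateway pattern is a one-line client change. Everything in this sketch is illustrative: the gateway host and the `X-Ulfberht-Key` header name are assumptions, not documented values.

```python
# Illustrative only: the gateway host and X-Ulfberht-Key header below are
# assumptions, not documented values.
GATEWAY_URL = "https://gateway.ulfberht.example/v1"

def gateway_client_config(provider_key: str, ulfberht_key: str) -> dict:
    """Settings for an OpenAI-style client: swap the base URL so every call
    transits the gateway, which verifies the response before returning it."""
    return {
        "base_url": GATEWAY_URL,  # the only change vs. calling the provider directly
        "default_headers": {
            "Authorization": f"Bearer {provider_key}",  # forwarded to the provider
            "X-Ulfberht-Key": ulfberht_key,             # authenticates to the gateway
        },
    }

cfg = gateway_client_config("sk-provider-key", "ulf-api-key")
```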

Pattern 02

SDK Integration

Python, Node.js, and Go SDKs for direct integration into your AI pipeline. Granular control over per-call configuration, domain rules, and confidence thresholds. The preferred pattern for new builds.

Per-call configuration and domain rules
Typed response objects in all SDKs
Async/await support, streaming-ready
Drop-in for existing pipelines
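The async support lends itself to verifying several responses concurrently. In this sketch `verify_async` is a local stub standing in for the SDK's call, so the signature and return shape are illustrative only.

```python
import asyncio

# `verify_async` is a local stub standing in for the SDK's async call;
# the real signature and return shape may differ.
async def verify_async(output: str) -> dict:
    await asyncio.sleep(0)  # placeholder for the network round-trip
    return {"output": output, "confidence": 0.91}

async def verify_all(outputs: list[str]) -> list[dict]:
    # Fan out: all verifications run concurrently instead of sequentially.
    return await asyncio.gather(*(verify_async(o) for o in outputs))

results = asyncio.run(verify_all(["draft A", "draft B", "draft C"]))
```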

Pattern 03

Async Webhook

Submit batches of AI outputs for verification. Ulfberht processes them asynchronously and fires a webhook when complete. Built for high-throughput pipelines processing thousands of outputs per hour.

Batch up to 1,000 outputs per request
Webhook fires on completion or failure
No latency impact on your live pipeline
Ideal for post-processing audit workflows
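A webhook receiver for this pattern might look like the following. The payload fields (`batch_id`, `status`) are assumed for illustration and are not a documented schema.

```python
# The payload fields below (batch_id, status) are assumed for illustration;
# check the real webhook schema before relying on them.
def handle_batch_webhook(payload: dict) -> str:
    """Route a batch-completion webhook to the right follow-up action."""
    if payload.get("status") == "completed":
        # Fetch results, e.g. from /v1/batch/{id}
        return f"fetch:{payload['batch_id']}"
    if payload.get("status") == "failed":
        return f"retry:{payload['batch_id']}"
    return "ignore"

action = handle_batch_webhook({"batch_id": "B-104", "status": "completed"})
```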

Endpoints

All endpoints are REST, return JSON, and require a Bearer token in the Authorization header. Base URL: https://api.ulfberht.com

POST /v1/verify

Verify a single AI output

Runs the full six-layer verification pipeline synchronously. Returns confidence score, claim statuses, behavioral scan, and audit ID. Average latency 300-600ms depending on domain and output length.
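A raw REST call can be assembled with nothing but the standard library. The body fields mirror the SDK example above and are assumed to match the wire format.

```python
import json
import urllib.request

API_URL = "https://api.ulfberht.com/v1/verify"

def build_verify_request(api_key: str, **fields) -> urllib.request.Request:
    """Assemble the POST; field names mirror the SDK example (model, input,
    output, domain) and are assumed to match the wire format."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(fields).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_verify_request(
    "your_key_here",
    model="gpt-4o",
    input="Summarize Q3 results",
    output="Revenue reached $4.2M, a 23% increase",
    domain="finance",
)
# urllib.request.urlopen(req) would then return the JSON verdict.
```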

POST /v1/verify/batch

Batch verification

Submit up to 1,000 outputs for async verification. Returns a batch ID immediately. Use the webhook pattern or poll /v1/batch/{id} for status. Results available for 30 days.
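If you poll instead of using webhooks, the loop is straightforward. Here `fetch_status` is a local stub standing in for a GET /v1/batch/{id} call, and the terminal status names are assumptions.

```python
import time

def poll_batch(batch_id: str, fetch_status, interval_s: float = 0.0,
               max_polls: int = 10) -> dict:
    """Poll until the batch reaches a terminal state. `fetch_status` stands in
    for a GET /v1/batch/{id} call; terminal status names are assumed."""
    for _ in range(max_polls):
        status = fetch_status(batch_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"batch {batch_id} still pending after {max_polls} polls")

# Stubbed server: pending twice, then completed.
responses = iter([{"status": "pending"}, {"status": "pending"},
                  {"status": "completed"}])
final = poll_batch("B-104", lambda _id: next(responses))
```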

GET /v1/audit/{id}

Retrieve audit trail

Returns the complete, immutable audit record for any verification ID. Includes original input, output, all six layer results, confidence timeline, and a signed hash for tamper detection.
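The signed hash enables a local tamper check. The actual signing scheme is not specified here, so this sketch assumes HMAC-SHA256 over the record's canonical JSON purely for illustration.

```python
import hashlib
import hmac
import json

def record_digest(record: dict, signing_key: bytes) -> str:
    """Digest over the canonical JSON of the record. The real scheme is not
    documented here; HMAC-SHA256 is an assumed stand-in."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hmac.new(signing_key, canonical.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"audit_id": "VR-2891", "confidence": 0.82, "verdict": "CONDITIONAL_PASS"}
key = b"example-signing-key"
stored = record_digest(record, key)

# Any mutation changes the digest, flagging tampering.
tampered = {**record, "confidence": 0.95}
```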

POST /v1/configure

Set domain-specific rules

Configure verification behavior per domain: confidence thresholds, claim sources to trust, behavioral failure modes to treat as hard blocks, oversight tiers for action classification. Changes apply immediately to all subsequent calls.
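A configuration payload for this endpoint might look like the following. Every field name here is an illustrative guess at the shape, not the documented schema.

```python
# Field names below are illustrative guesses at the configuration shape,
# not the documented schema.
healthcare_rules = {
    "domain": "healthcare",
    "confidence_thresholds": {"pass": 0.90, "human_review": 0.70},
    "trusted_sources": ["pubmed", "fda_labels"],
    "hard_block_modes": ["fabrication", "authority_mimicry"],
    "oversight_tier": "strict",
}
```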

GET /v1/health

Pipeline health check

Returns status for all six verification layers, current queue depth, evaluator model availability, and p50/p95/p99 latency for the last 5 minutes. Use for monitoring and uptime alerts.
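The response can feed a simple alerting gate. The payload shape follows the GET /v1/health example later in this document; the p99 budget is an arbitrary example value, not a recommended default.

```python
def health_alerts(health: dict, p99_budget_ms: int = 800) -> list[str]:
    """Turn a /v1/health response into a list of alert strings (empty = OK)."""
    alerts = [
        f"layer degraded: {name}"
        for name, state in health.get("layers", {}).items()
        if state != "healthy"
    ]
    if health.get("latency_p99_ms", 0) > p99_budget_ms:
        alerts.append(f"p99 over budget: {health['latency_p99_ms']}ms")
    return alerts

# Shaped like the GET /v1/health example in this document.
ok = health_alerts({"status": "operational",
                    "layers": {"dual_view": "healthy"},
                    "latency_p99_ms": 612})
```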

POST /v1/fingerprint

Model behavioral fingerprint

Run a structured probe sequence against any model to generate a behavioral fingerprint: sycophancy index, fabrication rate under pressure, hedge survival rate, and authority mimicry score. Use to baseline a model before production deployment.
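The fingerprint metrics can gate a deployment decision. The metric names come from the description above; the pass thresholds here are arbitrary example values, not recommended defaults.

```python
# Thresholds are arbitrary example values, not recommended defaults.
BASELINE_LIMITS = {
    "sycophancy_index": 0.20,
    "fabrication_rate": 0.05,
    "authority_mimicry": 0.15,
}

def fingerprint_ok(fingerprint: dict, min_hedge_survival: float = 0.80) -> bool:
    """Gate a model on its behavioral fingerprint before deployment.
    Missing metrics default to failing values, so absent data never passes."""
    if fingerprint.get("hedge_survival_rate", 0.0) < min_hedge_survival:
        return False
    return all(fingerprint.get(metric, 1.0) <= limit
               for metric, limit in BASELINE_LIMITS.items())

ok = fingerprint_ok({"sycophancy_index": 0.12, "fabrication_rate": 0.04,
                     "hedge_survival_rate": 0.91, "authority_mimicry": 0.08})
```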

What you get back

Every verification call returns the same schema regardless of domain or mode. Confidence score, per-claim statuses, behavioral scan results, and the audit ID you can use to retrieve the full record at any time.

confidence

0.0-1.0 overall verification confidence. Composite of all six layers. Use this as your primary gate.

verdict

PASS, CONDITIONAL_PASS, HUMAN_REQUIRED, or BLOCKED. Derived from confidence and domain rules.

claims

Array of extracted factual claims. Each has status (VERIFIED/UNVERIFIED/CONTRADICTED), the source document if found, and the claim text as it appeared in the output.

behavioral_scan

Results from the 150-mode behavioral scan. Count of modes checked, modes detected, and scores for the highest-risk individual patterns.

audit_id

Immutable identifier for this verification event. Pass to /v1/audit/{id} to retrieve the full record for compliance export.

POST /v1/verify → 200 OK
{
  "confidence": 0.82,
  "verdict": "CONDITIONAL_PASS",
  "claims": [
    {
      "text": "Revenue reached $4.2M",
      "status": "VERIFIED",
      "source": "SEC 10-Q"
    },
    {
      "text": "23% increase",
      "status": "UNVERIFIED",
      "source": null
    }
  ],
  "behavioral_scan": {
    "failure_modes_detected": 0,
    "modes_checked": 150,
    "sycophancy_score": 0.12,
    "fabrication_score": 0.04
  },
  "audit_id": "VR-2891",
  "latency_ms": 380
}
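The sample response above can be consumed directly. This small helper collects every claim that failed verification, which is the list you would surface to a reviewer.

```python
def unverified_claims(response: dict) -> list[str]:
    """Return the text of every claim that is not fully verified."""
    return [c["text"] for c in response.get("claims", [])
            if c["status"] in ("UNVERIFIED", "CONTRADICTED")]

# Using the shape of the sample POST /v1/verify response above:
sample = {
    "confidence": 0.82,
    "claims": [
        {"text": "Revenue reached $4.2M", "status": "VERIFIED", "source": "SEC 10-Q"},
        {"text": "23% increase", "status": "UNVERIFIED", "source": None},
    ],
}
flagged = unverified_claims(sample)  # ["23% increase"]
```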

SDKs & libraries

Python

v2.4.1
terminal
pip install ulfberht

Async/await support. Type-annotated response objects. Pydantic models for all schemas.

PyPI package →

Node.js

v2.1.0
terminal
npm install @ulfberht/sdk

ESM and CJS support. TypeScript definitions included. Works with any Node.js AI framework.

npm package →

Go

v1.8.2
terminal
go get ulfberht.com/go-sdk

Idiomatic Go. Context-aware. Struct-based request and response types with JSON tags.

pkg.go.dev →

REST API

v1
curl
curl -X POST \
  -H "Authorization: Bearer $ULFBERHT_API_KEY" \
  https://api.ulfberht.com/v1/verify

Standard REST. Bearer token auth. OpenAPI 3.1 spec available for import into any HTTP client.

OpenAPI spec →

Built for production at scale

Rate limits, SSO, on-premises deployment, and audit export for teams that cannot afford operational blind spots.

Rate limiting & quotas

Per-API-key rate limits with burst allowances. Quota dashboards and usage webhooks. No surprise overage invoices.

SSO & API key management

SAML 2.0, OIDC, and SCIM provisioning. Role-based API key scopes. Key rotation without downtime.

Webhook notifications

Configurable webhooks for batch completion, confidence threshold breaches, and behavioral anomaly detection across your fleet.

Custom domain modules

Healthcare, legal, finance, and government modules pre-built. Custom domain configuration for specialized use cases not covered by defaults.

On-premises deployment

Full platform available as a self-hosted deployment. Docker and Kubernetes. Air-gapped environments supported. No data leaves your infrastructure.

Audit trail export

Export any date range of audit records as JSON or signed PDF. Formatted for SOC 2, HIPAA, EU AI Act Article 14, and SEC model risk management submissions.

Start verifying your AI outputs today.

API access requires a brief onboarding call so we can configure the right domain modules for your use case. Most teams are in production within a week.

GET /v1/health → 200 OK
{
  "status": "operational",
  "layers": {
    "dual_view": "healthy",
    "behavioral_scan": "healthy",
    "claim_verification": "healthy",
    "oversight_classifier": "healthy",
    "memory_quarantine": "healthy",
    "multi_agent_boundary": "healthy"
  },
  "latency_p99_ms": 612
}