Developer Platform
One API call.
Full verification.
Drop Ulfberht into any AI pipeline in under ten minutes. Every response gets a confidence score, claim-level audit, behavioral scan, and a permanent audit trail before it reaches your users.
Quick Start
Up and running in minutes
Install the SDK, point it at your AI provider, and every response is verified before it leaves your pipeline. No infrastructure changes required.
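The SDK surface itself isn't shown on this page, so here is a rough sketch against the REST endpoint described below. The field names and key format are assumptions and may differ from the real API; this only builds the request, it does not send it.

```python
# Quick-start sketch (hypothetical request shape): build the POST that a
# verification call would send to /v1/verify. Field names are assumptions.
import json
import urllib.request

API_KEY = "ulf_live_example"  # replace with your real key

def build_verify_request(output_text: str, domain: str) -> urllib.request.Request:
    """Construct a Bearer-authenticated POST to the verify endpoint."""
    body = json.dumps({"output": output_text, "domain": domain}).encode("utf-8")
    return urllib.request.Request(
        "https://api.ulfberht.com/v1/verify",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_verify_request("Aspirin reduces fever.", domain="healthcare")
# urllib.request.urlopen(req) would run the pipeline synchronously and
# return the response schema described under "What you get back".
```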
Integration Patterns
Three ways to integrate
Pick the pattern that fits your architecture. All three write to the same audit trail and return the same response schema.
Pattern 01
API Gateway
Sits between your application and your AI providers. Every response from GPT-4o, Claude, Gemini, or your own model is verified before it reaches users. No code changes in your application layer.
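Because the gateway is a drop-in proxy, adopting it is typically just a base-URL change rather than a code change. A minimal sketch, assuming a hypothetical gateway host (the real URL would come from your deployment):

```python
import os

PROVIDER_BASE_URL = "https://api.openai.com/v1"
GATEWAY_BASE_URL = "https://gateway.ulfberht.example/v1"  # hypothetical host

def resolve_base_url() -> str:
    """Route traffic through the verification gateway when enabled.
    Application code that consumes this URL never changes."""
    if os.environ.get("ULFBERHT_GATEWAY", "") == "1":
        return GATEWAY_BASE_URL
    return PROVIDER_BASE_URL
```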
Pattern 02
SDK Integration
Python, Node.js, and Go SDKs for direct integration into your AI pipeline. Granular control over per-call configuration, domain rules, and confidence thresholds. The preferred pattern for new builds.
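One way "granular per-call configuration" might look in practice: call-site keyword arguments layered over client-level defaults. This is a hypothetical client shape, not the real SDK; every name here is an assumption.

```python
class UlfberhtClient:
    """Hypothetical client shape: per-call kwargs override client defaults."""

    def __init__(self, api_key: str, *, domain: str = "general",
                 min_confidence: float = 0.85):
        self.api_key = api_key
        self.defaults = {"domain": domain, "min_confidence": min_confidence}

    def build_payload(self, output: str, **overrides) -> dict:
        # Later keys win: call-site overrides beat client defaults.
        return {**self.defaults, **overrides, "output": output}

client = UlfberhtClient("ulf_live_example", domain="legal")
payload = client.build_payload("The statute was enacted in 1996.",
                               min_confidence=0.95)
```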
Pattern 03
Async Webhook
Submit batches of AI outputs for verification. Ulfberht processes them asynchronously and fires a webhook when complete. Built for high-throughput pipelines processing thousands of outputs per hour.
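Webhook deliveries are commonly authenticated by an HMAC signature over the raw request body. Whether Ulfberht signs this way, and which header carries the signature, are assumptions in this sketch; the constant-time comparison pattern is the important part.

```python
import hashlib
import hmac

WEBHOOK_SECRET = b"whsec_example"  # hypothetical secret format

def sign(body: bytes) -> str:
    """HMAC-SHA256 over the raw webhook body (assumed signing scheme)."""
    return hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()

def is_authentic(body: bytes, signature_header: str) -> bool:
    # compare_digest avoids leaking a timing side channel
    return hmac.compare_digest(sign(body), signature_header)

body = b'{"batch_id": "b_123", "status": "complete"}'
authentic = is_authentic(body, sign(body))
```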
API Reference
Endpoints
All endpoints are REST, return JSON, and require a Bearer token in the Authorization header. Base URL: https://api.ulfberht.com
Verify a single AI output
Runs the full six-layer verification pipeline synchronously. Returns confidence score, claim statuses, behavioral scan, and audit ID. Average latency is 300–600 ms, depending on domain and output length.
Batch verification
Submit up to 1,000 outputs for async verification. Returns a batch ID immediately. Use the webhook pattern or poll /v1/batch/{id} for status. Results available for 30 days.
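Since each batch call accepts at most 1,000 outputs, larger workloads need client-side chunking before submission. A small helper:

```python
from typing import Iterator, List

MAX_BATCH_SIZE = 1000  # documented per-call limit

def chunk_outputs(outputs: List[str],
                  size: int = MAX_BATCH_SIZE) -> Iterator[List[str]]:
    """Yield successive batches no larger than the API limit."""
    for start in range(0, len(outputs), size):
        yield outputs[start:start + size]

# 2,500 outputs -> three batch submissions
batches = list(chunk_outputs([f"output {i}" for i in range(2500)]))
```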
Retrieve audit trail
Returns the complete, immutable audit record for any verification ID. Includes original input, output, all six layer results, confidence timeline, and a signed hash for tamper detection.
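The signed hash makes client-side tamper detection possible: recompute the digest over the record and compare it to the stored value. The canonicalization and hash algorithm below (sorted-key compact JSON, SHA-256) are assumptions, not the documented scheme.

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    """Hash a canonical (sorted-key, compact) JSON form of the record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {"audit_id": "a_123", "confidence": 0.97, "verdict": "PASS"}
digest = record_digest(record)

record["confidence"] = 0.42           # simulate tampering
tampered = record_digest(record) != digest
```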
Set domain-specific rules
Configure verification behavior per domain: confidence thresholds, claim sources to trust, behavioral failure modes to treat as hard blocks, oversight tiers for action classification. Changes apply immediately to all subsequent calls.
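A sketch of what a rules update payload might look like, with a client-side sanity check on the threshold before sending. All field names here are hypothetical.

```python
def validate_rules(rules: dict) -> dict:
    """Reject obviously invalid thresholds before sending the update."""
    threshold = rules.get("min_confidence")
    if not (isinstance(threshold, (int, float)) and 0.0 <= threshold <= 1.0):
        raise ValueError("min_confidence must be between 0.0 and 1.0")
    return rules

healthcare_rules = validate_rules({
    "domain": "healthcare",           # hypothetical field names throughout
    "min_confidence": 0.95,
    "hard_block_modes": ["fabrication_under_pressure", "authority_mimicry"],
    "oversight_tier": "human_required",
})
```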
Pipeline health check
Returns status for all six verification layers, current queue depth, evaluator model availability, and p50/p95/p99 latency for the last 5 minutes. Use for monitoring and uptime alerts.
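For uptime alerting, the usual pattern is to poll this endpoint and page when a layer is down or tail latency exceeds a budget. The response shape below is assumed for illustration.

```python
def needs_alert(health: dict, p99_budget_ms: int = 1500) -> bool:
    """Fire an alert if any layer is degraded or p99 latency blows the budget."""
    layers_ok = all(status == "ok" for status in health["layers"].values())
    return not layers_ok or health["latency_ms"]["p99"] > p99_budget_ms

sample = {                                    # assumed /v1/health shape
    "layers": {f"layer_{i}": "ok" for i in range(1, 7)},
    "queue_depth": 12,
    "latency_ms": {"p50": 310, "p95": 540, "p99": 880},
}
healthy = not needs_alert(sample)
```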
Model behavioral fingerprint
Run a structured probe sequence against any model to generate a behavioral fingerprint: sycophancy index, fabrication rate under pressure, hedge survival rate, and authority mimicry score. Use to baseline a model before production deployment.
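Baselining only pays off if you compare later fingerprints against the first one. A drift-check sketch, with the metric names taken from the description above but the numeric shape assumed:

```python
def drifted(baseline: dict, current: dict, tolerance: float = 0.05) -> list:
    """Return the metrics that moved more than `tolerance` from baseline."""
    return [metric for metric in baseline
            if abs(current.get(metric, 0.0) - baseline[metric]) > tolerance]

baseline = {"sycophancy_index": 0.12, "fabrication_rate": 0.03,
            "hedge_survival_rate": 0.91, "authority_mimicry_score": 0.08}

# Simulate a regression after a provider-side model update
current = dict(baseline, fabrication_rate=0.11)
changed = drifted(baseline, current)
```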
Response Schema
What you get back
Every verification call returns the same schema regardless of domain or mode. Confidence score, per-claim statuses, behavioral scan results, and the audit ID you can use to retrieve the full record at any time.
confidence
0.0-1.0 overall verification confidence. Composite of all six layers. Use this as your primary gate.
verdict
PASS, CONDITIONAL_PASS, HUMAN_REQUIRED, or BLOCKED. Derived from confidence and domain rules.
claims
Array of extracted factual claims. Each has status (VERIFIED/UNVERIFIED/CONTRADICTED), the source document if found, and the claim text as it appeared in the output.
behavioral_scan
Results from the 150-mode behavioral scan. Count of modes checked, modes detected, and scores for the highest-risk individual patterns.
audit_id
Immutable identifier for this verification event. Pass to /v1/audit/{id} to retrieve the full record for compliance export.
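In application code the verdict is usually the switch point, with confidence as a secondary gate. A gating sketch over a sample response (field names match the schema above; the sample values are invented):

```python
def gate(response: dict, min_confidence: float = 0.85) -> str:
    """Map a verification response to an application action."""
    verdict = response["verdict"]
    if verdict == "BLOCKED":
        return "reject"
    if verdict == "HUMAN_REQUIRED":
        return "escalate"
    if verdict == "CONDITIONAL_PASS" or response["confidence"] < min_confidence:
        return "review"
    return "deliver"

sample = {
    "confidence": 0.97,
    "verdict": "PASS",
    "claims": [{"status": "VERIFIED", "text": "Aspirin reduces fever."}],
    "behavioral_scan": {"modes_checked": 150, "modes_detected": 0},
    "audit_id": "a_8f3c",
}
action = gate(sample)
```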
Libraries
SDKs & libraries
Python — v2.4.1
Async/await support. Type-annotated response objects. Pydantic models for all schemas.
PyPI package →

Node.js — v2.1.0
ESM and CJS support. TypeScript definitions included. Works with any Node.js AI framework.
npm package →

Go — v1.8.2
Idiomatic Go. Context-aware. Struct-based request and response types with JSON tags.
pkg.go.dev →

REST API — v1
Standard REST. Bearer token auth. OpenAPI 3.1 spec available for import into any HTTP client.
OpenAPI spec →

Enterprise
Built for production at scale
Rate limits, SSO, on-premises deployment, and audit export for teams that cannot afford operational blind spots.
Rate limiting & quotas
Per-API-key rate limits with burst allowances. Quota dashboards and usage webhooks. No surprise overage invoices.
SSO & API key management
SAML 2.0, OIDC, and SCIM provisioning. Role-based API key scopes. Key rotation without downtime.
Webhook notifications
Configurable webhooks for batch completion, confidence threshold breaches, and behavioral anomaly detection across your fleet.
Custom domain modules
Healthcare, legal, finance, and government modules pre-built. Custom domain configuration for specialized use cases not covered by defaults.
On-premises deployment
Full platform available as a self-hosted deployment. Docker and Kubernetes. Air-gapped environments supported. No data leaves your infrastructure.
Audit trail export
Export any date range of audit records as JSON or signed PDF. Formatted for SOC 2, HIPAA, EU AI Act Article 14, and SEC model risk management submissions.
Get Access
Start verifying your AI outputs today.
API access requires a brief onboarding call so we can configure the right domain modules for your use case. Most teams are in production within a week.