Verifiable, offline AI systems.

We focus on traceable model runs and offline deployment.
Clear provenance, documented configs, and repeatable setups.

Design goals

Offline by design

Designed to run without outbound network calls or telemetry.

Repeatable runs

We aim for reproducible setups with documented tolerances and signed artifacts.

Data minimization

We design for minimal collection and keep evidence local where possible.

Verifiable is not truth

We do not promise the model is right. We aim to show what model ran, with what config, on what input. That's the part you can verify.

Provenance

Artifacts should trace back to their origin. Model weights, adapters, and runtime are identified where possible.
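A minimal sketch of one way to pin artifact identities: hash each file's contents so a run can name exactly which weights, adapter, and runtime build it used. Filenames here are hypothetical placeholders, not our actual artifacts.

# Sketch: pin artifacts to content hashes for provenance records.
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def provenance_record(artifacts: dict[str, Path]) -> dict:
    """Map each named artifact (weights, adapter, runtime) to its digest."""
    return {name: sha256_file(path) for name, path in artifacts.items()}

if __name__ == "__main__":
    record = provenance_record({
        "weights": Path("model.gguf"),            # hypothetical filenames
        "adapter": Path("adapter.safetensors"),
    })
    print(json.dumps(record, indent=2, sort_keys=True))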

Manifests

Structured declarations of what should run. Machine-readable. Diffable.
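For illustration, a manifest might look like the sketch below. The field names are assumptions, not a published schema; writing JSON with sorted keys keeps the file stable and line-diffable.

# Illustrative manifest: a structured declaration of what should run.
import json

manifest = {
    "model": {"name": "example-7b", "sha256": "<weights digest>"},      # placeholder values
    "adapter": {"name": "example-lora", "sha256": "<adapter digest>"},
    "runtime": {"name": "example-runtime", "version": "0.0.0"},
    "config": {"temperature": 0.0, "max_tokens": 256},
}

with open("run-manifest.json", "w") as f:
    json.dump(manifest, f, indent=2, sort_keys=True)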

Signed Configs

Configurations can be signed so tampering is detectable.
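One possible approach, sketched with Ed25519 signatures from the third-party cryptography package (an assumption, not necessarily our signing scheme): sign the canonical bytes of a config, then verify them before use.

# Sketch: detect config tampering by signing canonical config bytes.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def canonical_bytes(config: dict) -> bytes:
    # Sorted keys make the same config always serialize to the same bytes.
    return json.dumps(config, sort_keys=True, separators=(",", ":")).encode()

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

config = {"model": "example-7b", "temperature": 0.0}   # placeholder config
signature = private_key.sign(canonical_bytes(config))

# Any later edit to the config makes verification fail.
try:
    public_key.verify(signature, canonical_bytes(config))
    print("config verified")
except InvalidSignature:
    print("config was modified")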

Hash-Chained Logs

Each log entry can reference the hash of the one before it; deleting or modifying a record becomes detectable.
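A minimal sketch of the idea, using only the standard library: every entry commits to the previous entry's hash, so rewriting history breaks the chain. The event payloads are made up for illustration.

# Sketch of a hash-chained log.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list[dict], payload: dict) -> None:
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    log.append({"prev": prev_hash, "payload": payload,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps({"prev": entry["prev"], "payload": entry["payload"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"event": "run_started", "manifest": "run-manifest.json"})
append_entry(log, {"event": "run_finished", "tokens": 128})
assert verify_chain(log)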

Energy is a constraint

In transfer-heavy workloads, data movement dominates energy cost. Unified memory architectures can reduce that cost by eliminating copies between CPU and GPU memory. We measure energy efficiency in Joules per token.
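The arithmetic behind the metric, with illustrative numbers rather than measurements: average power over a run times the run's duration gives energy in Joules, which is then divided by the tokens generated.

# Back-of-the-envelope Joules-per-token calculation (illustrative only).
def joules_per_token(mean_power_watts: float, duration_s: float, tokens: int) -> float:
    energy_joules = mean_power_watts * duration_s
    return energy_joules / tokens

# Example: 18 W average package power for 12 s while generating 240 tokens.
print(round(joules_per_token(18.0, 12.0, 240), 3))  # 0.9 J/token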

Methodology

We document a measurement methodology for Joules/token benchmarking on Apple silicon.

macOS powermetrics sampling • 10-run averaging • thermal normalization • documented tolerances
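A sketch of the aggregation step only: combine per-run Joules/token figures into a mean with a simple tolerance band. The run values below are invented, and collecting them (for example from powermetrics samples) is outside this sketch.

# Aggregation sketch for 10-run averaging with a tolerance band.
import statistics

runs_j_per_token = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.90, 0.91, 0.88, 0.94]  # 10 runs

mean = statistics.mean(runs_j_per_token)
stdev = statistics.stdev(runs_j_per_token)

print(f"J/token: {mean:.3f} ± {stdev:.3f} (n={len(runs_j_per_token)})")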

Recent Research Notes

Jan 2026

Verifiability Is Not Truth

Verification proves what happened, not that the output is correct. This distinction matters for compliance and trust.

Stay informed

Get notified when we publish new research or open access to our tools.

No spam, ever. We only email when we have something worth sharing.