Research Pillars
Our research focuses on four interconnected areas. Each pillar has a working definition, early artifacts, and measurements in progress.
Verifiable Inference
Definition
Ability to provide cryptographic evidence that a specific model processed specific inputs to produce specific outputs.
Why It Matters
Audit trails need evidence of what ran. Compliance needs a repeatable process. Trust needs proof.
What We're Documenting
Draft hash-chain specs, manifest formats, and signing protocols.
What We Measure
Early verification overhead, artifact size, and chain-validation latency; a minimal sketch of the chain idea follows.
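
To make the draft concrete, here is a minimal Python sketch of the hash-chain idea. The record fields, the "genesis" sentinel, and the stand-in HMAC signature are illustrative assumptions, not the draft spec; a production chain would use asymmetric signatures.

    import hashlib
    import hmac
    import json

    def record_step(prev_hash: str, model_digest: str, prompt: str, output: str) -> dict:
        # Each record commits to the previous record's hash, a digest of the
        # model weights, and digests of the exact input and output text.
        body = {
            "prev": prev_hash,
            "model": model_digest,
            "input": hashlib.sha256(prompt.encode()).hexdigest(),
            "output": hashlib.sha256(output.encode()).hexdigest(),
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return body

    def verify_chain(records: list[dict]) -> bool:
        # Recompute every link; editing any record breaks all later hashes.
        prev = "genesis"
        for rec in records:
            body = {k: rec[k] for k in ("prev", "model", "input", "output")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True

    def sign_manifest(records: list[dict], key: bytes) -> str:
        # Stand-in signature: HMAC-SHA256 over the serialized chain.
        return hmac.new(key, json.dumps(records, sort_keys=True).encode(), hashlib.sha256).hexdigest()

Verification here is only hashing plus JSON serialization, which is exactly what the overhead and latency measurements above track.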
Anti-Collection ML
Definition
Machine learning systems designed to minimize data egress by default.
Why It Matters
Third-party dependencies introduce collection risk. Offline-first systems reduce exposure.
What We're Documenting
Dependency audits, telemetry-free runtime patterns, network isolation checklists.
What We Measure
Outbound connection attempts and the data-egress risk surface; a sketch of one way to count attempts follows.
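
One way to count outbound connection attempts during a test run is to intercept socket connections in-process. A minimal sketch; this illustrative monkeypatch does not see DNS lookups done in C or traffic from non-Python subprocesses, which is why the real checklist also works at the network-isolation layer.

    import socket

    _real_connect = socket.socket.connect
    attempts = []

    def _audited_connect(self, address):
        # Log the attempt, then refuse it so nothing actually leaves the host.
        attempts.append(address)
        raise ConnectionRefusedError(f"egress blocked: {address}")

    socket.socket.connect = _audited_connect
    try:
        # Stand-in for the pipeline under test; 203.0.113.10 is a
        # documentation-only (TEST-NET-3) address.
        sock = socket.socket()
        try:
            sock.connect(("203.0.113.10", 443))
        except ConnectionRefusedError:
            pass
        finally:
            sock.close()
    finally:
        socket.socket.connect = _real_connect

    print(f"{len(attempts)} outbound connection attempt(s): {attempts}")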
Deterministic Tuning
Definition
Reproducible adapter creation, where identical inputs and configurations are intended to yield identical outputs across runs.
Why It Matters
Regulatory workflows need a repeatable process. Debugging needs determinism.
What We're Documenting
Seed management notes, version pinning strategies, tolerance specs.
What We Measure
Run-to-run variance and cross-platform drift; a toy reproducibility check follows.
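
A toy reproducibility check, with the tuning run replaced by a seeded stand-in and artifacts compared by digest. Real training adds nondeterminism (parallel kernels, nondeterministic reductions) that seeding alone does not remove, which is why we measure variance rather than assume it away.

    import hashlib
    import random

    def tuning_run(seed: int, steps: int = 100) -> str:
        # Stand-in for adapter training: all randomness flows from one
        # explicit seed via an isolated RNG (no global state).
        rng = random.Random(seed)
        weights = [rng.gauss(0.0, 1.0) for _ in range(steps)]
        blob = ",".join(f"{w:.12f}" for w in weights).encode()
        # Two runs count as reproduced only if the artifact digests match.
        return hashlib.sha256(blob).hexdigest()

    digests = {tuning_run(seed=42) for _ in range(5)}
    print("reproduced" if len(digests) == 1
          else f"variance detected: {len(digests)} distinct artifacts")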
Efficiency on Apple Silicon
Definition
Optimizing inference performance per watt on unified memory architectures.
Why It Matters
Data movement dominates energy cost. Unified memory reduces transfers.
What We're Documenting
Draft joules-per-token methodology, memory-bandwidth analysis, and thermal notes.
What We Measure
Joules per token, tokens per second per watt, and thermal-throttling frequency; the arithmetic is sketched below.
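
The joules-per-token arithmetic is simple: integrate sampled package power over the run and divide by tokens generated. A sketch with made-up numbers; on Apple Silicon the power samples would come from a tool such as macOS powermetrics, whose parsing is omitted here.

    def joules_per_token(power_samples_w: list[float],
                         sample_period_s: float,
                         tokens_generated: int) -> float:
        # Rectangle-rule integral of power over time, divided by token count.
        energy_j = sum(power_samples_w) * sample_period_s
        return energy_j / tokens_generated

    # Illustrative numbers only: ten 0.5 s samples averaging 8 W, 120 tokens.
    samples_w = [7.9, 8.1, 8.0, 8.2, 7.8, 8.0, 8.1, 7.9, 8.0, 8.0]
    jpt = joules_per_token(samples_w, sample_period_s=0.5, tokens_generated=120)
    print(f"{jpt:.3f} J/token")  # 40 J / 120 tokens = 0.333 J/token

Tokens per second per watt is the reciprocal of this figure, so the two metrics come from the same measurement.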
Interested in collaboration?
We welcome collaboration with research institutions, defense contractors, and organizations that need verifiable AI.
Talk to a human