MLNavigator and adapterOS
MLNavigator is the company. adapterOS is the offline inference runtime. Here is what each does, what we can show, and what is still in progress.
MLNavigator builds adapterOS: a deterministic multi-LoRA runtime for regulated, offline deployments.
Each run produces cryptographic receipts teams can verify during audit and incident review.
Built for constrained environments: deterministic policy, signed artifacts, and offline-first operation.
→ See our Compliance Roadmap for how we meet CMMC/AS9100 requirements.
→ See: MLNavigator and adapterOS (clean mental model).
Validation signals include NSF I-Corps participation, a $25k grant, and 50+ customer discovery conversations.
Designed to run without outbound network calls, license checks, or telemetry.
Receipts, manifests, and signed artifacts link outputs to inputs and configuration so audits don’t depend on screenshots or “trust us.”
Built for audit surfaces shaped by CMMC 2.0 Level 2 (a common requirement for DoD suppliers), AS9100 (aerospace quality), ITAR (export-controlled technical data), and FAA documentation workflows.
Non-confidential schematic
A typical run looks like: upload a drawing or document package, run offline checks, review flagged issues, then export an audit-ready proof pack (receipts, configuration, and hashes).
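The proof pack idea above can be sketched in a few lines. This is an illustrative structure only, not adapterOS's actual schema: the field names and the build_proof_pack helper are assumptions made for this example. The point is that flagged issues are bound to the exact input bytes and configuration that produced them.

```python
import hashlib
import json


def sha256_hex(data: bytes) -> str:
    """Content-address any artifact by its SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()


def build_proof_pack(input_doc: bytes, config: dict, findings: list) -> dict:
    # Illustrative proof pack: ties review findings to hashes of the
    # input and the configuration, so an auditor can re-derive both.
    config_bytes = json.dumps(config, sort_keys=True).encode()
    return {
        "input_sha256": sha256_hex(input_doc),
        "config": config,
        "config_sha256": sha256_hex(config_bytes),
        "findings": findings,
    }


pack = build_proof_pack(
    b"drawing package bytes",
    {"ruleset": "v1", "offline": True},
    ["missing tolerance on hole callout"],
)
```

Because the config is serialized with sorted keys before hashing, two packs built from the same configuration always carry the same config_sha256, which is what makes the pack diffable across runs.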
Leadership function
Owns deployment scoping, buyer workflow fit, and operational rollout for regulated programs.
Leadership function
Owns deterministic runtime design, receipt integrity, and hardware-aware execution constraints.
Regulated operators face audit exposure when AI execution cannot be reproduced or explained. MLNavigator focuses on runtime infrastructure that makes execution traceable, repeatable, and reviewable.
An AI platform for regulated industries where cloud AI cannot go: local-first, verifiable, and designed for high-assurance environments.
We do not promise the model is right. We aim to show what ran, with what configuration, against what input. That is what you can verify in an audit.
Artifacts should trace back to their origin. Model weights, adapters, and the runtime itself are identified where possible.
Structured declarations of what should run. Machine-readable. Diffable.
Configurations can be signed so tampering is detectable.
Each log entry can reference the hash of the previous entry, so deletion or modification becomes detectable.
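The chaining idea is simple to demonstrate: each record carries the hash of the record before it, so removing or editing any entry breaks every hash downstream. A minimal sketch (function names are illustrative, not adapterOS's log format):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry


def append_entry(log: list, entry: dict) -> None:
    # Each record commits to the previous record's hash.
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "entry": entry}, sort_keys=True)
    log.append({
        "prev": prev,
        "entry": entry,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })


def verify_chain(log: list) -> bool:
    # Walk the chain: every link and every digest must check out.
    prev = GENESIS
    for rec in log:
        body = json.dumps({"prev": rec["prev"], "entry": rec["entry"]},
                          sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

Deleting a middle entry, or editing any field of any entry, makes verify_chain return False, which is exactly the tamper-evidence property the copy describes.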
In transfer-heavy workloads, data movement dominates energy cost. Unified memory architectures can reduce this cost by eliminating copies between CPU and GPU memory. We measure this with Joules per token.
We document a measurement methodology for Joules/token benchmarking on Apple silicon.
macOS powermetrics sampling • 10-run averaging • thermal normalization • documented tolerances
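The Joules/token arithmetic behind the methodology is straightforward: energy per run is average power times duration, and the metric averages that over repeated runs. The sketch below uses made-up numbers; in practice the power figures would come from powermetrics samples under the documented tolerances.

```python
def joules_per_token(avg_power_watts: list[float],
                     run_seconds: list[float],
                     tokens_per_run: int) -> float:
    """Energy per run = average power (W) x duration (s); the result
    averages Joules/token across the repeated runs."""
    energies = [p * s for p, s in zip(avg_power_watts, run_seconds)]
    return sum(energies) / (len(energies) * tokens_per_run)


# Two illustrative runs of 500 tokens each:
jpt = joules_per_token([12.0, 11.5], [10.0, 10.4], 500)
# (12.0*10.0 + 11.5*10.4) / (2 * 500) = 239.6 / 1000 = 0.2396 J/token
```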
What adapterOS verification covers, what it does not cover, and where human oversight applies.
KV-cache reuse is one of the largest inference speedups available, but adapters change the attention weights that produced the cache. A per-layer state hash turns this from a gamble into a verifiable policy.
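The policy can be sketched as follows. This is a simplified illustration of the idea, not adapterOS's implementation: each layer's state hash digests whatever determines its attention weights (base weight identity plus active adapters), and because a layer's KV entries depend on the outputs of every layer below it, the cache is reusable only up to the first layer whose hash differs.

```python
import hashlib


def layer_hash(base_id: str, adapters: tuple) -> str:
    # Digest everything that determines this layer's attention weights:
    # the base weights' identity plus the ordered active adapters.
    return hashlib.sha256("|".join((base_id, *adapters)).encode()).hexdigest()


def reusable_prefix(cached: list, current: list) -> int:
    # KV entries at layer i depend on layers 0..i-1, so reuse stops at
    # the first layer whose state hash changed.
    n = 0
    for old, new in zip(cached, current):
        if old != new:
            break
        n += 1
    return n


# Four layers; the adapter on layers 2-3 is swapped between requests.
cached = [layer_hash(f"layer{i}", ("loraA",)) for i in range(4)]
current = [layer_hash(f"layer{i}", ("loraA",) if i < 2 else ("loraB",))
           for i in range(4)]
# reusable_prefix(cached, current) -> 2: layers 0-1 reuse, 2-3 recompute
```

Either the full prefix matches and the cache is provably valid, or it does not and the runtime recomputes: the decision is a hash comparison, not a heuristic.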
Get notified when we publish new research or open access to our tools.
No spam, ever. We only email when we have something worth sharing.