Research Notes

A structured reading list and technical notes from MLNavigator. If you’re new here, start with the short list below.

Start here

Recommended reading order for understanding adapterOS, offline deployment, and auditable execution.

Feb 2026

MLNavigator and adapterOS

Company and product overview: market need, product scope, validation status, and business model.

company, adapterOS, governance, business model
Jan 2026

Compliance Roadmap

Target frameworks and current status for CMMC, AS9100, and ITAR compliance.

compliance, CMMC, AS9100, ITAR
Jan 2026

Execution Receipts and the Problem of Citable AI

AI citations are failing because they lack verifiable provenance. Execution receipts offer a path toward AI outputs that can be meaningfully cited.

provenance, citations, reproducibility, receipts, academic integrity

Blog: Industry insights and roadmap

Industry context, compliance positioning, and deployment planning.

Field observations informing this work:

  • "Multiple departments handle compliance, and teams adhere once rules are set." — Program manager, enterprise integrator
  • "Unified memory changes on-device AI efficiency." — Senior software engineer, platform ecosystem
  • "More members join the AI division every month." — Meetings director, defense association

Verification Scope

What adapterOS verification covers, what it does not cover, and where human oversight applies.

verification, receipts, scope, compliance

Why Stop Conditions Belong in the Receipt

How generation ends is as important as how it begins. Recording the termination reason in the cryptographic receipt makes stop conditions an auditable decision.

stop conditions, receipts, audit, inference, determinism, compliance, generation
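As a minimal sketch of the idea in that post: the termination reason becomes one more field committed to by the receipt hash. The function and field names here are illustrative, not the adapterOS receipt schema.

```python
import hashlib
import json

def finalize_receipt(prompt_hash: str, output_tokens: list[int],
                     stop_reason: str) -> dict:
    """Build a receipt that records *why* generation ended.

    stop_reason might be 'eos_token', 'max_tokens', or 'stop_sequence'.
    Because it is inside the hashed body, an auditor can detect if the
    stated termination cause was altered after the fact.
    """
    body = {
        "prompt_sha256": prompt_hash,
        "output_len": len(output_tokens),
        "stop_reason": stop_reason,  # the auditable termination cause
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "receipt_sha256": digest}

receipt = finalize_receipt("ab12", [5, 9, 2], stop_reason="max_tokens")
```

Any change to `stop_reason` changes `receipt_sha256`, which is what makes the stop condition auditable rather than merely logged.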

Nondeterminism as a Governance Failure

A technical governance study examining how nondeterminism in AI systems creates audit, compliance, and operational control failures across regulated industries.

governance, compliance, audit, determinism, risk management

Security and isolation

When to Reuse the KV Cache Safely with Adapters

KV-cache reuse is one of the largest inference speedups available, but adapters change the attention weights that produced the cache. A per-layer state hash turns this from a gamble into a verifiable policy.

kv-cache, adapters, LoRA, inference, performance, determinism, caching, verification, multi-tenant

Sealed Adapters and the Geometry of Multi-Tenant Inference

Running multiple tenants on shared inference hardware requires more than access control. Sealed adapter containers and cryptographic isolation make tenant boundaries provable.

adapters, multi-tenant, isolation, security, ITAR, inference, cryptography, sealed

Threat Model: Offline-by-Default

What changes when you remove the network. Risks eliminated and risks amplified by offline-first architecture.

security, threat-model, offline, architecture

Blog: Tech deep-dives

Detailed engineering notes on repeatable runs, offline operation, receipts, and artifact formats.

Cache Credits: Cryptographic Proof You Were Not Overcharged

When an inference provider reuses cached computation, the customer should pay less. Verifiable cache credits make this cryptographically provable, moving beyond contractual assurance.

caching, tokens, billing, receipts, verification, metering, audit, enterprise
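The verification step can be sketched as recomputing the bill from fields the provider committed to in the receipt. The field names and the flat 50% cache discount below are illustrative assumptions, not a real billing schema.

```python
def verify_cache_credit(receipt: dict, rate_per_token: float) -> bool:
    """Recompute the expected bill from committed receipt fields.

    Assumed (hypothetical) receipt fields:
      tokens_total      - all tokens served for the request
      tokens_from_cache - tokens satisfied by reused computation
      amount_billed     - what the provider actually charged
    """
    CACHE_DISCOUNT = 0.5  # assumed: cached tokens billed at half rate
    fresh = receipt["tokens_total"] - receipt["tokens_from_cache"]
    expected = (fresh * rate_per_token
                + receipt["tokens_from_cache"] * rate_per_token * CACHE_DISCOUNT)
    return abs(expected - receipt["amount_billed"]) < 1e-9

ok = verify_cache_credit(
    {"tokens_total": 1000, "tokens_from_cache": 400, "amount_billed": 0.80},
    rate_per_token=0.001,
)
```

The customer no longer has to trust the invoice; they re-derive it from the same committed numbers the provider signed.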

Unified Memory Options

Apple M3 Ultra vs NVIDIA Grace-Blackwell on CPU-GPU data movement and the business tradeoffs of unified memory.

hardware, memory, inference

Kernel Allow-Lists: Determinism as Configurable Policy

Most systems treat determinism as a binary switch. Configurable kernel allow-lists let you choose which operations must be reproducible and which can trade reproducibility for speed.

determinism, GPU, kernels, policy, inference, performance, configuration
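In outline, the allow-list turns determinism from a global flag into a per-operation lookup. The kernel names here are placeholders; a real system would map to specific GPU kernel implementations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KernelPolicy:
    """Per-operation determinism policy instead of a binary switch."""
    deterministic_ops: frozenset[str]  # ops that must be reproducible

    def select_kernel(self, op: str) -> str:
        # Ops on the allow-list get the reproducible kernel; everything
        # else may trade bitwise reproducibility for speed.
        if op in self.deterministic_ops:
            return f"{op}:deterministic"
        return f"{op}:fast"

policy = KernelPolicy(deterministic_ops=frozenset({"matmul", "softmax"}))
```

Because the policy is data rather than a build flag, it can be recorded alongside a run and audited later.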

Q15 Fixed-Point Quantization as a Determinism Boundary

Floating-point computation does not need to be deterministic if you quantize before committing. The commit boundary is where reproducibility actually lives.

determinism, quantization, fixed-point, Q15, inference, reproducibility, routing
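A minimal sketch of the commit boundary: Q15 is a signed 16-bit fixed-point format with 15 fractional bits, so values in [-1, 1) quantize to integers in [-32768, 32767]. Float noise below the Q15 resolution disappears at the commit step.

```python
def to_q15(x: float) -> int:
    """Quantize a float in [-1, 1) to Q15 (signed 16-bit, 15 fractional bits)."""
    q = int(round(x * 32768))
    return max(-32768, min(32767, q))  # saturate at the int16 range

def from_q15(q: int) -> float:
    """Dequantize back to float; resolution is 1/32768 ~= 3.05e-5."""
    return q / 32768.0

# Two floating-point paths whose results differ well below the Q15
# resolution commit to the same integer:
a = 0.123456789
b = a + 1e-9  # stand-in for nondeterministic float accumulation noise
```

Reproducibility is then a property of the committed integers, not of the floating-point computation that produced them.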

Subscribe to updates via RSS