Trust & Verification

Draft specifications for how we approach verifiability: threat models, artifact formats, determinism requirements, energy measurement, and our data collection policy.

Note: This page reflects design goals and draft specifications. Implementation is in progress and may change.

Threat Model

What We Protect Against

  • Unauthorized data exfiltration during inference
  • Undetected modification of model weights or adapters
  • Configuration tampering without audit trail
  • Non-reproducible inference runs
  • Log deletion or modification

What We Do Not Claim

  • AI outputs are "true" or "correct"
  • Protection against hardware-level attacks
  • Protection against malicious operators with root access
  • Model safety or alignment guarantees
  • Perfect numerical reproducibility across architectures

Artifact Chain

The draft spec defines a proof pack: a signed record of cryptographic hashes that links inputs, model, adapter, runtime, and configuration to outputs, with each pack referencing the previous one to form a tamper-evident chain.

Figure: Artifact chain diagram showing how input, model, adapter, config, and runtime hashes combine into a signed proof pack.

Proof Pack Structure

{
  "version": "1.0.0",
  "timestamp": "2025-11-12T14:32:01Z",
  "input_hash": "sha256:a7b9c3d4e5f6...",
  "model_hash": "sha256:1a2b3c4d5e6f...",
  "adapter_hash": "sha256:9f8e7d6c5b4a...",
  "runtime_hash": "sha256:0123456789ab...",
  "config_hash": "sha256:fedcba987654...",
  "output_hash": "sha256:abcdef012345...",
  "signature": {
    "algorithm": "Ed25519",
    "public_key": "MCowBQYDK2VwAyEA...",
    "value": "base64:signature_bytes..."
  },
  "previous_hash": "sha256:chain_link..."
}
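
A minimal sketch of how a pack like the one above could be assembled and signed, assuming Python with hashlib and the pyca/cryptography package; the helper names and canonicalization choices are illustrative, not part of the spec.

# Sketch: assemble and sign a proof pack (Python, hashlib, pyca/cryptography).
# Helper names and canonicalization choices are illustrative assumptions.
import base64
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256_tag(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()


def build_proof_pack(artifacts: dict, previous_hash: str,
                     private_key: Ed25519PrivateKey) -> dict:
    # artifacts maps field prefixes ("input", "model", "adapter", "runtime",
    # "config", "output") to the raw bytes being attested.
    pack = {
        "version": "1.0.0",
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "previous_hash": previous_hash,
    }
    for name, data in artifacts.items():
        pack[f"{name}_hash"] = sha256_tag(data)

    # Sign the canonical JSON of the unsigned fields.
    payload = json.dumps(pack, sort_keys=True).encode()
    spki = private_key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    pack["signature"] = {
        "algorithm": "Ed25519",
        "public_key": base64.b64encode(spki).decode(),
        "value": "base64:" + base64.b64encode(private_key.sign(payload)).decode(),
    }
    return pack

A verifier would recompute the hashes from the artifacts it holds, re-serialize the unsigned fields the same way, and check the Ed25519 signature against the published public key.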

Determinism Requirements

Fixed Seeds

Random number generators are intended to use declared seeds. Seeds are recorded in the proof pack.
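
As an illustration only, a run could derive all randomness from one declared seed and return it for inclusion in the proof pack; the sketch below assumes Python's random module and NumPy, and the function name is hypothetical.

# Sketch: seed every RNG the run touches from one declared value.
# Assumes Python's random module and NumPy; extend for other frameworks.
import random

import numpy as np


def apply_declared_seed(seed: int) -> dict:
    random.seed(seed)
    np.random.seed(seed)
    # Returned so the seed can be recorded alongside the config hash.
    return {"declared_seed": seed}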

Pinned Versions

Runtime, model, and adapter versions are intended to be pinned and hashed. No floating dependencies.
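
One possible shape for this, assuming a simple JSON lockfile of expected SHA-256 digests (the file name and layout are illustrative, not specified):

# Sketch: verify that pinned artifacts still match their recorded hashes.
# The lockfile path and structure are illustrative assumptions.
import hashlib
import json
from pathlib import Path


def verify_pins(lockfile: str = "pins.lock.json") -> None:
    pins = json.loads(Path(lockfile).read_text())  # e.g. {"model.gguf": "sha256:..."}
    for artifact, expected in pins.items():
        digest = "sha256:" + hashlib.sha256(Path(artifact).read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"{artifact} does not match its pinned hash")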

Declared Tolerances

Numerical tolerances are documented. Expected variance is specified per operation.

Note: Bit-exact reproducibility is not guaranteed across different hardware. We specify tolerances and document expected variance for cross-platform deployments.
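
For example, a per-operation tolerance could be enforced with a relative/absolute comparison; the rtol and atol values below are placeholders, not the documented tolerances.

# Sketch: compare two runs' outputs against a declared tolerance.
# Assumes NumPy; rtol/atol are placeholders, specified per operation in practice.
import numpy as np


def within_declared_tolerance(a: np.ndarray, b: np.ndarray,
                              rtol: float = 1e-5, atol: float = 1e-6) -> bool:
    return bool(np.allclose(a, b, rtol=rtol, atol=atol))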

Energy Measurement Method

Joules per token is our efficiency metric: inference energy above idle, divided by the number of tokens generated. The methodology below is designed for repeatability on macOS.

Measurement Steps

  1. Thermal stabilization: 5 minutes idle before measurement
  2. Baseline capture: 30 seconds of idle power via powermetrics
  3. Workload execution: Run inference with fixed input batch
  4. Power sampling: 100ms intervals during inference
  5. Calculation: (Total energy - Baseline energy) / Token count
  6. Averaging: Report mean of 10 runs with standard deviation (see the sketch after this list)
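
A sketch of the arithmetic in steps 4-6, assuming power samples in watts taken at the 100 ms interval above; parsing powermetrics output into those samples is not shown.

# Sketch: joules per token from 100 ms power samples (steps 4-6).
# Assumes samples are average watts per interval; collection is out of scope.
from statistics import mean, stdev

SAMPLE_INTERVAL_S = 0.1  # 100 ms sampling interval


def joules_per_token(run_samples_w: list, baseline_w: float, token_count: int) -> float:
    # Energy is power integrated over time; subtract idle energy for the same window.
    total_j = sum(run_samples_w) * SAMPLE_INTERVAL_S
    baseline_j = baseline_w * len(run_samples_w) * SAMPLE_INTERVAL_S
    return (total_j - baseline_j) / token_count


def report(per_run_values: list) -> dict:
    # per_run_values: joules-per-token from each of the 10 runs.
    return {"mean_j_per_token": mean(per_run_values), "stdev": stdev(per_run_values)}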

Required Reporting

  • Hardware model and chip variant
  • macOS version
  • Ambient temperature range
  • Power adapter state

Repeatability Rules

  • Same hardware for comparison
  • Same thermal conditions
  • Same input data
  • <5% variance between runs (see the sketch below)
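
The <5% rule could be checked as relative standard deviation across the reported runs; this is a sketch of one possible check, not the specified procedure.

# Sketch: flag runs whose relative standard deviation exceeds 5%.
from statistics import mean, stdev


def runs_are_repeatable(j_per_token_runs: list, max_rel_std: float = 0.05) -> bool:
    return stdev(j_per_token_runs) / mean(j_per_token_runs) < max_rel_std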

No Third-Party Collection

Explicit Statement: This website does not include third-party tracking scripts, analytics pixels, or telemetry that reports to external services. AdapterOS is designed without them.

We do not use Google Analytics, Facebook Pixel, Mixpanel, Segment, or similar services.

Allowed Minimal Logs

We keep minimal server access logs for security and debugging: IP addresses, timestamps, and request paths. Logs are typically retained for 30 days and are not shared with third parties.

Opt-In Only

Contact form submissions are stored only so we can respond to inquiries. Newsletter signup is explicit opt-in only; no pre-checked boxes.