Working papers and methodology notes from MLNavigator. These are early lab materials and may change as the work matures.

Proprietary Data Exposure in AI Systems
2026-01-08 · security · data · compliance · draft
Working draft outlining how proprietary data can leak through AI systems, with mitigation ideas for enterprise use.
Read note →
Draft note on a hash-chain approach for verifying inference runs in air-gapped environments. We describe a proposed artifact format and signing protocol, with early prototype measurements in progress.
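The hash-chain idea can be sketched minimally: each inference record extends the chain by hashing the previous head together with the record, so a single signature over the final head covers every run. The genesis value, the record format, and the use of HMAC as the signing primitive are all illustrative assumptions here, not the protocol the note actually proposes.

```python
import hashlib
import hmac

GENESIS = b"\x00" * 32  # assumed genesis head for an empty chain


def extend_chain(prev_head: bytes, record: bytes) -> bytes:
    """Append one inference record: h_i = SHA-256(h_{i-1} || record_i)."""
    return hashlib.sha256(prev_head + record).digest()


def chain_head(records) -> bytes:
    """Fold a sequence of serialized records into a single chain head."""
    head = GENESIS
    for record in records:
        head = extend_chain(head, record)
    return head


def sign_head(key: bytes, head: bytes) -> bytes:
    """Sign only the final head (HMAC stands in for the real signing scheme)."""
    return hmac.new(key, head, hashlib.sha256).digest()


def verify(records, key: bytes, signature: bytes) -> bool:
    """Recompute the chain and check the signature in constant time."""
    return hmac.compare_digest(sign_head(key, chain_head(records)), signature)
```

Because every head depends on all earlier records, tampering with any single run invalidates the one signature an auditor has to check, which is what makes the scheme attractive for air-gapped transfer of a whole batch of runs.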
Read note →

Draft methodology note outlining how we measure inference energy on Apple silicon using macOS powermetrics. The protocol is intended for repeatable internal benchmarking; tooling and results are still in development.
Read note →
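As a rough illustration of the energy arithmetic behind a powermetrics-based protocol, the sketch below averages power samples from a captured text dump and converts them to joules over a wall-clock interval. The `CPU Power: N mW` line format is an assumption about the tool's output, which varies by machine and macOS version; the methodology note's actual parsing and tooling may differ.

```python
import re

# Assumed sample line format from a powermetrics text dump
# (e.g. captured with its cpu_power sampler); adjust for your macOS version.
POWER_LINE = re.compile(r"CPU Power:\s+(\d+)\s+mW")


def average_power_mw(dump: str) -> float:
    """Average CPU power (mW) across all samples found in the dump."""
    samples = [int(m.group(1)) for m in POWER_LINE.finditer(dump)]
    if not samples:
        raise ValueError("no CPU Power samples found in dump")
    return sum(samples) / len(samples)


def energy_joules(dump: str, duration_s: float) -> float:
    """Energy = average power (W) x wall-clock duration (s)."""
    return average_power_mw(dump) / 1000.0 * duration_s
```

For example, a dump containing samples of 1500 mW and 2500 mW over a 10 s inference run averages to 2 W, giving 20 J for the run; repeatability then comes from fixing the sampler, interval, and workload across benchmark runs.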