Every computation you run could be cryptographically provable. Every Databricks job, every Snowflake query - all backed by tamper-evident attestation that auditors actually trust.
Runs on AWS • Azure • GCP • Databricks • Snowflake
How do you prove your ML model was actually trained on the data you claim? Now you can.
Compliance teams love cryptographic proof. No more explaining why your results are valid.
Research should be verifiable. This makes your computational work actually reproducible.
Three steps. Takes about two minutes to set up. Works with whatever you're already doing.
Works with your existing setup
Just a pip install. No config files, no setup scripts, no headaches.
pip install anansi-compute
One line change
Your code stays the same. Just wrap it and get cryptographic proof of execution.
import anansi
result = anansi.compute(your_function, data, proof=True)
Tamper-evident results
Standard cryptographic proof that anyone can verify. No trust required.
{
  "result": "<your_computed_result>",
  "proof": "0x7f8a9b2c3d4e5f6a...",
  "attestation": "verified",
  "timestamp": "2025-01-20T23:33:38Z"
}
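As a sketch of what "anyone can verify" could mean in practice: recompute a tamper-evident seal over the bundle's canonical JSON and compare it to the published proof. The `seal_bundle` and `verify_bundle` helpers and the SHA-256-over-canonical-JSON scheme are illustrative assumptions, not the actual anansi proof format.

```python
import hashlib
import json

def seal_bundle(bundle: dict) -> str:
    # Hypothetical seal: SHA-256 over the bundle's canonical JSON
    # (sorted keys, no whitespace). The real format may differ.
    canonical = json.dumps(bundle, sort_keys=True, separators=(",", ":"))
    return "0x" + hashlib.sha256(canonical.encode()).hexdigest()

def verify_bundle(bundle: dict, proof: str) -> bool:
    # Recompute the seal and compare; editing any field changes it.
    return seal_bundle(bundle) == proof

bundle = {
    "result": 42,
    "attestation": "verified",
    "timestamp": "2025-01-20T23:33:38Z",
}
proof = seal_bundle(bundle)

assert verify_bundle(bundle, proof)        # untampered bundle: passes
bundle["result"] = 43
assert not verify_bundle(bundle, proof)    # tampered bundle: fails
```

The point of canonical serialization (sorted keys, fixed separators) is that the verifier and the prover hash byte-identical input; without it, two semantically equal JSON documents could produce different seals.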
Every computation generates a cryptographic proof bundle. Here's what auditors see.
Who ran it, what code, which data inputs
Cloud, region, instance type, timing
CPU and GPU hardware verification
Tamper-evident seal over the entire bundle
{
  "version": "1.0",
  "proof_id": "prf_2025-08-21_7c2a",
  "job": {
    "job_id": "dbx-13482",
    "caller": "analyst@company.com",
    "entrypoint": "s3://bucket/ml-training:sha256:3f8f...e21"
  },
  "route": {
    "cloud": "azure",
    "region": "eastus",
    "instance": "NCCads_H100_v5"
  },
  "timing": {
    "started_at": "2025-08-21T01:03:12Z",
    "ended_at": "2025-08-21T01:07:55Z"
  },
  "attestation": {
    "cpu": {
      "type": "SEV-SNP",
      "verified": true
    },
    "gpu": {
      "type": "NRAS",
      "verified": true
    }
  },
  "signature": {
    "alg": "EdDSA",
    "verified": true
  }
}
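The `signature` block above names EdDSA. As a hedged illustration of how an auditor might check such a signature, here is Ed25519 sign-and-verify using the third-party `cryptography` package; the key handling and the assumption that the signature covers the bundle's canonical JSON are illustrative, and anansi's real key distribution is not shown here.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the service's signing key; in practice a verifier
# would hold only the published public key, never the private key.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

# Assumption: the signature covers the bundle's canonical JSON bytes.
bundle = {"proof_id": "prf_2025-08-21_7c2a", "signature_alg": "EdDSA"}
message = json.dumps(bundle, sort_keys=True, separators=(",", ":")).encode()
signature = signing_key.sign(message)

# verify() raises InvalidSignature if the bundle or signature was altered.
try:
    public_key.verify(signature, message)
    print("signature: verified")
except InvalidSignature:
    print("signature: INVALID")
```

Because Ed25519 verification is deterministic and needs only the public key, the check can run anywhere, with no callback to the service that produced the proof.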
It's going open source soon. If you find bugs or have ideas, let me know. Always looking for feedback from people actually using this stuff.