
In decentralized compute, trust is the bottleneck. Users can't just assume their GPU jobs ran correctly — not when payouts and critical applications depend on correctness. But verifying compute is hard. Full zero-knowledge proofs of complex workloads like AI training or rendering are still years away.
That's why BitSage uses a hybrid approach: blending cryptographic proofs, deterministic recomputation, hardware attestation, and sampling to provide verifiable compute now, across a range of job types.
Let's break it down.
🧩 What Are We Trying to Prove?
At BitSage, we don't aim to prove every instruction. Instead, we prove:
- That the job was actually executed as submitted
- That the claimed resources (GPU-hours, memory, I/O) were consumed
- That the result matches the declared input/output
- That the execution came from a known node in a verified environment
This is Proof of Compute, not full ZK computation.
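What a node attests to can be sketched as a simple receipt. The `JobReceipt` name and its fields below are illustrative, not BitSage's actual schema:

```python
import hashlib
from dataclasses import dataclass

def h(data: bytes) -> str:
    """SHA-256 hex digest used for all commitments in this sketch."""
    return hashlib.sha256(data).hexdigest()

@dataclass(frozen=True)
class JobReceipt:
    """Hypothetical record of the claims a node makes after running a job."""
    job_id: str
    input_hash: str      # hash of the submitted job spec + inputs
    output_hash: str     # hash of the produced result
    gpu_seconds: float   # claimed resource consumption
    node_id: str         # identity of the executing node

receipt = JobReceipt(
    job_id="job-42",
    input_hash=h(b"job spec + input data"),
    output_hash=h(b"result bytes"),
    gpu_seconds=12.5,
    node_id="node-7",
)
```

Each field maps to one of the four claims above: execution, resource consumption, input/output consistency, and node identity.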
🛠 Hybrid Verification Model
1. ZK Proofs Where They Make Sense
Jobs like STARK or SNARK generation already produce verifiable proofs — we just route and record them. For these, the proof is the verification.
→ Zero added overhead, cryptographic trust
2. Deterministic Spot Recompute
For jobs like rendering, video encoding, or procedural content generation, output is deterministic. We recompute k% of the result (e.g. 5% of frames or tiles) and verify it matches the original hash.
→ 2–5% overhead, high trust
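A minimal sketch of that spot-check, with a toy deterministic `render_frame` standing in for the real renderer:

```python
import hashlib
import random

def render_frame(frame_idx: int) -> bytes:
    # Stand-in for a deterministic renderer: same input -> same bytes.
    return f"frame-{frame_idx}-pixels".encode()

def spot_check(frame_hashes: list[str], k_percent: float, seed: int) -> bool:
    """Recompute a random k% of frames and compare against claimed hashes."""
    n = len(frame_hashes)
    sample_size = max(1, int(n * k_percent / 100))
    rng = random.Random(seed)  # shared seed makes the check reproducible
    for idx in rng.sample(range(n), sample_size):
        recomputed = hashlib.sha256(render_frame(idx)).hexdigest()
        if recomputed != frame_hashes[idx]:
            return False  # mismatch: the node's claimed output is wrong
    return True

claimed = [hashlib.sha256(render_frame(i)).hexdigest() for i in range(100)]
assert spot_check(claimed, k_percent=5, seed=1234)
```

Because the sample indices are drawn after the node commits to its hashes, a cheating node cannot know in advance which frames will be checked.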
3. TEE Attestation + Hash Commitments
For AI inference or scientific simulation, we verify:
- TEE environment signature (e.g., NVIDIA H100 confidential computing or AMD SEV-SNP)
- Resource usage logs (time, VRAM, CPU)
- Input/output hashes
- Optional: model checksum and pinned version
→ <1% overhead, tamper-evident logs
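The tamper-evidence pattern can be illustrated with an HMAC standing in for the TEE's hardware signature. Real TEEs sign attestation reports with hardware-fused keys verified against vendor certificates; `TEE_KEY` below is purely for demonstration:

```python
import hashlib
import hmac

# Toy stand-in for a TEE attestation key. In practice the key never leaves
# the hardware and the signature is checked against the vendor's cert chain.
TEE_KEY = b"demo-only-secret"

def sign_report(report: bytes) -> str:
    """Produce a MAC over the execution report (toy TEE signature)."""
    return hmac.new(TEE_KEY, report, hashlib.sha256).hexdigest()

def verify_report(report: bytes, signature: str) -> bool:
    """Constant-time check that the report was not altered after signing."""
    return hmac.compare_digest(sign_report(report), signature)

report = b"input_hash=abc|output_hash=def|vram_gb=40|wall_seconds=12"
sig = sign_report(report)
assert verify_report(report, sig)
assert not verify_report(report + b"-tampered", sig)
```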
4. Random Spot-Check Sampling
For jobs where full re-run isn't feasible (e.g., LLM inference), we use:
- Random prefix/token replay (verify partial outputs)
- Committed weights/seeds + challenge period
- Optional: slashing for false claims
→ 3–7% overhead, tunable based on risk
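Prefix replay can be sketched like this, with a toy deterministic token generator in place of a real model (greedy decoding with committed weights and seed):

```python
def generate_tokens(prompt: str, n: int) -> list[str]:
    # Stand-in for deterministic decoding: fixed weights + seed + greedy
    # sampling means the same prompt always yields the same tokens.
    return [f"{prompt}-tok{i}" for i in range(n)]

def replay_prefix(prompt: str, claimed: list[str], prefix_len: int) -> bool:
    """Re-run only the first prefix_len tokens and compare to the claim."""
    return generate_tokens(prompt, prefix_len) == claimed[:prefix_len]

claimed_output = generate_tokens("hello", 50)
assert replay_prefix("hello", claimed_output, prefix_len=8)
```

The verifier pays for 8 tokens instead of 50; the prefix length is the tuning knob that trades overhead against detection probability.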
5. Merkle/KZG Commitment Logs
All job inputs, model weights, outputs, and intermediate states (if needed) are committed via Merkle trees or KZG polynomial commitments. These let us verify consistency without storing full job data on-chain.
→ Auditability without storage bloat
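A minimal Merkle-root sketch shows why only a single digest needs to go on-chain (this illustrates the Merkle variant only; KZG commitments work differently):

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaves pairwise into one root; only this digest goes on-chain."""
    level = [sha(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

leaves = [b"job inputs", b"model weights", b"outputs", b"exec log"]
root = merkle_root(leaves)
assert merkle_root(leaves) == root                # deterministic
assert merkle_root([b"x", *leaves[1:]]) != root   # any change alters the root
```

Auditors who hold the off-chain data can recompute the root and compare it to the on-chain value; nothing larger than 32 bytes is stored per job.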
🎯 Per-Job Verification Strategies
| Job Type | Verification Strategy |
|---|---|
| ZK Proof Gen | Native proof |
| 3D Rendering | k% tile re-render + output hash |
| Video Transcode | Segment replay + watermark |
| AI Inference | TEE + spot-check + hash commitments |
| Scientific Sim | Seed replay + invariants/residual checks |
| AI Training (small) | Checkpoint hash + TEE logs |
| AI Training (large) | Not supported without colocated clusters (for now) |
🔄 Finality: When Is a Job "Done"?
Once a job is completed:
- Output hash + proof data is submitted to Starknet
- The client has a dispute window (based on verification tier)
- If no challenge is raised, the node is paid and the proof is final
Jobs with higher value or sensitivity can opt into stricter verification tiers (e.g., full replay or multi-node redundancy).
🛡 Why This Matters
In traditional cloud compute, trust is implicit. In BitSage, it's cryptographically enforced.
This unlocks:
- Decentralized payouts based on provable work
- Auditable compute for legal, scientific, or financial applications
- Anti-censorship execution from any region
- Composable, permissionless compute layers for smart contracts
🧭 Where We're Headed
As ZK-proving systems mature (e.g., zkML toolchains and general-purpose zkVMs like SP1 and RISC Zero), we'll expand proof coverage. But today, hybrid verification gets us 80% of the way there — across rendering, inference, ZK workloads, and scientific jobs — without waiting for perfect zero-knowledge.
BitSage makes verifiable compute real. Not someday. Now.

