Inference Verification
OpenGradient offers a range of cryptographic and cryptoeconomic schemes for securing inference on our network. Developers can choose the method best suited to their use case, making the right tradeoff between speed, cost, and security.
NOTE
Models are executed on our permissionless and scalable inference nodes and are verified and secured in a distributed fashion by all validators on the OpenGradient Network. Read more about it in OpenGradient Architecture.
We currently offer the following security methods:
- ZKML (Zero-Knowledge Machine Learning)
- TEE (Trusted Execution Environments)
- ZK-CRV (ZKML + Challenge-Response Validation)
- Vanilla Inference
Developers can pick the method that best fits their application. The table below summarizes the tradeoffs and suggested use cases:
| Method | Overhead | Security | Recommendation |
| --- | --- | --- | --- |
| ZKML | 100-1000x slower | Instantly verified using a cryptographic proof | Best for smaller models serving high-impact use cases |
| TEE | 2-3x slower | Instantly verified using attestation | Best for medium to large models |
| ZK-CRV | 1-2x slower | Proof of stake with a challenge window | Best for use cases that are more latency- and cost-sensitive |
| Vanilla | No overhead | No verification | Best for Gen AI or other large models |
To configure and select the verification method for your inference, refer to our NeuroML and Python SDK documentation.
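As a rough illustration, the snippet below sketches what selecting a verification mode might look like from the Python SDK. The client, function, and enum names (`og.init`, `og.infer`, `og.InferenceMode`) and the placeholder model CID are assumptions for this sketch; refer to the Python SDK documentation for the exact API.

```python
# Minimal sketch, assuming the Python SDK exposes an init() call, an infer()
# call, and an InferenceMode enum; exact names, parameters, and return shape
# may differ -- check the Python SDK docs.
import opengradient as og

og.init(private_key="<your-private-key>")  # assumed authentication flow

# Run a TEE-secured inference; swap the mode (e.g. ZKML, VANILLA) to make a
# different speed/cost/security tradeoff for the same model.
result = og.infer(
    model_cid="<model-cid>",                    # placeholder model identifier
    inference_mode=og.InferenceMode.TEE,        # verification method selection
    model_input={"features": [0.1, 0.2, 0.3]},  # example input payload
)
print(result)
```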
TIP
Even within the same transaction, users can pick different security modes for different inferences, e.g., TEE for an LLM request and ZKML for a classical ML model.
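For instance, a single flow might route a generative request through a TEE while proving a small classifier with ZKML. The sketch below reuses the same hypothetical `og.infer` interface as above and assumes an `og.llm_completion` helper; the real SDK calls may be named differently.

```python
# Sketch of mixing security modes within one flow (names are illustrative,
# as above): a TEE-backed LLM call next to a ZKML-proved classical model.
import opengradient as og

og.init(private_key="<your-private-key>")

# Large generative model: TEE keeps latency overhead low while still
# producing an attestation of correct execution.
summary = og.llm_completion(              # hypothetical LLM helper name
    model_cid="<llm-model-cid>",
    prompt="Summarize the classifier's role in this pipeline.",
    inference_mode=og.InferenceMode.TEE,
)

# Small classifier driving the decision: ZKML attaches a cryptographic proof.
signal = og.infer(
    model_cid="<classifier-model-cid>",
    inference_mode=og.InferenceMode.ZKML,
    model_input={"features": [1.2, 0.4, 3.3]},
)
```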
Selecting the right verification method is an important decision; application developers should weigh it carefully against the risks and requirements of their use case.