Storage
OpenGradient uses Walrus for decentralized storage. Walrus provides the storage layer for AI models and large inference proofs, keeping these heavyweight assets available off-chain while the blockchain itself stays efficient.
How It Works
Walrus stores data as blobs, each identified by a unique Blob ID. OpenGradient uses these Blob IDs to reference:
- AI Models: Model files uploaded to the Model Hub are stored on Walrus and retrieved by inference nodes when needed
- Large Proofs: ZKML and other large inference proofs are stored on Walrus, with only the Blob ID recorded on-chain
This separation keeps the blockchain lean, since only references live on-chain, while maintaining full data availability and verifiability.
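The reference pattern above can be sketched in a few lines. This is an illustrative simulation, not the actual OpenGradient contract schema: the record type, field names, and the in-memory stand-in for Walrus are all assumptions made for clarity.

```python
from dataclasses import dataclass

# Hypothetical on-chain record: only the Walrus Blob ID and minimal
# metadata are stored on-chain; the full payload lives in Walrus.
@dataclass(frozen=True)
class OnChainRef:
    blob_id: str  # Walrus Blob ID identifying the stored data
    kind: str     # e.g. "model" or "proof"

# Simulated Walrus blob store keyed by Blob ID (illustrative only).
walrus = {
    "0xabc123": b"<model weights bytes>",
}

def resolve(ref: OnChainRef) -> bytes:
    """Fetch the full payload from Walrus using the on-chain Blob ID."""
    return walrus[ref.blob_id]

ref = OnChainRef(blob_id="0xabc123", kind="model")
payload = resolve(ref)
```

The chain only ever carries `OnChainRef`-sized records; the payload, whatever its size, is resolved from Walrus on demand.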
Model Storage
When a model is uploaded to the Model Hub, it's stored on Walrus and assigned a Blob ID. Inference nodes download and cache models locally as needed:
- Model uploaded to Walrus → Blob ID assigned
- User requests inference for the model
- Inference node downloads model using Blob ID (if not cached)
- Model cached locally for future requests
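The download-and-cache flow above can be sketched as follows. The dictionary standing in for the Walrus network, the blob ID, and the function name are illustrative assumptions; a real node would fetch over the network from Walrus.

```python
# Simulated Walrus network: Blob ID -> model bytes. In the real network
# this lookup would be a fetch from Walrus; names here are illustrative.
WALRUS = {"blob-model-v1": b"onnx-model-bytes"}

# Local node cache, checked before any network fetch.
cache: dict[str, bytes] = {}

def get_model(blob_id: str) -> bytes:
    """Return model bytes, downloading from Walrus only on a cache miss."""
    if blob_id not in cache:              # download if not cached
        cache[blob_id] = WALRUS[blob_id]  # fetch by Blob ID
    return cache[blob_id]                 # serve cached copy

first = get_model("blob-model-v1")   # cache miss: fetched from Walrus
second = get_model("blob-model-v1")  # cache hit: served locally
```

Only the first inference request for a model pays the download cost; subsequent requests are served from the node's local cache.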
NOTE: To learn more about hosting models, see the Model Hub.
Proof Storage
Large inference proofs are also stored on Walrus to avoid blockchain bloat:
- On-chain: Blob ID reference and verification status
- Walrus: Full proof data
This allows the network to scale without state bloat while ensuring all proofs remain accessible and verifiable.
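The on-chain/off-chain split for proofs can be sketched like this. Everything here is a simplified simulation under stated assumptions: the dictionaries standing in for Walrus and the chain, the record shape, and the hash-derived stand-in Blob ID are illustrative, not OpenGradient's actual scheme.

```python
import hashlib

# Full proof payloads stored on Walrus (simulated); only a Blob ID
# reference and a verification status go on-chain.
walrus: dict[str, bytes] = {}
chain: dict[str, dict] = {}  # blob_id -> {"verified": bool}

def store_proof(proof: bytes) -> str:
    """Store the full proof on Walrus; record only a reference on-chain."""
    # Stand-in Blob ID (real Walrus IDs derive from the blob's encoding).
    blob_id = hashlib.sha256(proof).hexdigest()[:16]
    walrus[blob_id] = proof                # Walrus: full proof data
    chain[blob_id] = {"verified": False}   # on-chain: reference + status
    return blob_id

def verify_proof(blob_id: str, check) -> bool:
    """Fetch the full proof from Walrus and record the result on-chain."""
    ok = check(walrus[blob_id])            # verification runs off-chain
    chain[blob_id]["verified"] = ok        # only the status lands on-chain
    return ok

bid = store_proof(b"zkml-proof-bytes")
verify_proof(bid, lambda p: len(p) > 0)
```

Chain state grows by one small record per proof regardless of proof size, while anyone holding the Blob ID can retrieve the full proof from Walrus and re-verify it.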
