Storage Node
OpenGradient relies on a customized decentralized filestore, powered by our storage nodes, for hosting and downloading models. Users can upload any model to this filestore or download existing ones, and every uploaded model is immediately available for on-chain inference, resulting in a fully integrated and open infrastructure for on-chain AI.
Uploaded models are instantly available for inference on OpenGradient. When a model is requested for the first time, the selected inference node downloads and caches a local copy, so subsequent requests are served quickly. This process is handled entirely by OpenGradient's infrastructure and is fully abstracted away, letting users and developers focus on their tasks without delay.
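The following is a minimal sketch of this user-facing flow. The names `og.init`, `og.upload`, and `og.infer` are illustrative stand-ins rather than the documented SDK API; the point is the shape of the upload-then-infer workflow, not exact signatures.

```python
# Hypothetical sketch: function names below are illustrative, not the
# documented OpenGradient SDK API.
import opengradient as og  # assumed import name

og.init(private_key="YOUR_PRIVATE_KEY")  # authenticate against the network

# Upload an ONNX model to the decentralized filestore; the returned
# identifier is how the model is referenced for inference.
model_id = og.upload("path/to/model.onnx")

# The model is immediately available for inference. The first call is
# slower while the selected inference node downloads and caches it;
# later calls hit the node's local cache.
result = og.infer(model_id, {"input": [1.0, 2.0, 3.0]})
print(result)
```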
The order of events when using a new model is as follows (a sketch of the node-side logic follows the list):
- New model uploaded to filestore
- User requests model inference through EVM or SDK
- Inference node downloads and caches model locally
- Inference node executes the model with the given input
- Inference node returns output and inference proof
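The core of steps 3-5 is a download-once-then-cache pattern on the inference node. Below is a minimal sketch of that control flow; the `filestore` and `runtime` objects, the cache directory, and the proof call are all stand-ins supplied by the caller, not real OpenGradient components.

```python
# Sketch of the node-side cache logic. Only the download-once-then-cache
# control flow mirrors the described behavior; filestore access, model
# execution, and proof generation are stubbed via injected objects.
from pathlib import Path

CACHE_DIR = Path("/var/cache/inference-node/models")  # hypothetical location

def fetch_model(model_id: str, filestore) -> Path:
    """Return a local path to the model, downloading it only on first use."""
    local_path = CACHE_DIR / model_id
    if not local_path.exists():
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        # First request for this model: pull it from the decentralized
        # filestore and persist it for all future requests.
        local_path.write_bytes(filestore.download(model_id))
    return local_path

def handle_inference(model_id: str, inputs, filestore, runtime):
    model_path = fetch_model(model_id, filestore)
    output = runtime.run(model_path, inputs)            # execute with given input
    proof = runtime.prove(model_path, inputs, output)   # hypothetical proof step
    return output, proof                                # returned to the caller
```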
NOTE: To learn more about hosting your models on OpenGradient, see Model Hub.
What about private models?
OpenGradient also supports hosting models privately by running a custom inference node, where the model is only cached locally on the node and never stored in our decentralized filestore.
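As a rough illustration of the difference, a private setup amounts to pointing the node at a model on its own disk and skipping the filestore publish step entirely. The `NodeConfig` and `serve` names below are hypothetical, chosen only to make the distinction concrete.

```python
# Hypothetical sketch of a custom inference node serving a private model.
# The model is read from local disk and never uploaded to the filestore;
# NodeConfig and serve are illustrative names, not a real API.
from dataclasses import dataclass

@dataclass
class NodeConfig:
    model_path: str                      # local-only model file
    publish_to_filestore: bool = False   # private: never published

def serve(config: NodeConfig) -> None:
    assert not config.publish_to_filestore, "private models stay local"
    # ...load the model from config.model_path and answer inference
    # requests exactly as a public node would, minus the filestore step.

serve(NodeConfig(model_path="/opt/models/private_model.onnx"))
```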