
Storage Node

OpenGradient relies on a custom decentralized filestore, powered by our storage nodes, for hosting and downloading models. Users can upload and download any model on this filestore and make it immediately available for on-chain inference, resulting in a fully integrated and open infrastructure for on-chain AI.
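
As an illustration, uploading a model through the Python SDK might look like the sketch below. The function and parameter names (`og.init`, `og.upload`, `model_path`, `model_name`, `version`) are assumptions for illustration; consult the SDK reference for the exact interface.

```python
import opengradient as og

# Initialize the SDK with your account credentials.
# (Credential parameters are assumed; check the SDK reference.)
og.init(private_key="<your-private-key>", email="<email>", password="<password>")

# Upload a local ONNX model file to the decentralized filestore.
# The returned identifier is what later inference requests reference.
result = og.upload(
    model_path="model.onnx",  # path to your serialized model
    model_name="my-model",
    version="0.01",
)
print(result)  # e.g. the model identifier and version metadata
```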

Uploaded models are immediately available for inference on OpenGradient. When a model is requested for the first time, the selected inference node downloads and caches a local copy, ensuring rapid access on subsequent requests. This process is fully abstracted and handled by OpenGradient's infrastructure, letting users and developers focus on their applications.

The order of events when using a new model is as follows:

  1. New model uploaded to filestore
  2. User requests model inference through EVM or SDK
  3. Inference node downloads and caches model locally
  4. Inference node executes the model with the given input
  5. Inference node returns output and inference proof
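
A minimal sketch of step 2 from the user's side is shown below, with steps 3-5 noted in comments where they occur. The call shape here (`og.infer`, its parameters, and the `(transaction hash, output)` return value) is an assumption about the SDK; see the SDK reference for the actual signature.

```python
import opengradient as og

og.init(private_key="<your-private-key>", email="<email>", password="<password>")

# Step 2: request inference by model identifier through the SDK.
# Steps 3-4 happen transparently on the selected inference node:
# it downloads and caches the model, then executes it on the input.
model_cid = "<model-identifier>"  # returned when the model was uploaded
model_input = {"input": [1.0, 2.0, 3.0]}

# Step 5: the node returns the output together with an inference proof;
# unpacking as (tx_hash, output) is an assumed return shape.
tx_hash, output = og.infer(
    model_cid=model_cid,
    model_input=model_input,
    inference_mode=og.InferenceMode.VANILLA,  # assumed verification-mode enum
)
print(tx_hash, output)
```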

NOTE

To learn more about hosting your models on OpenGradient, go to Model Hub.

What about private models?

OpenGradient also supports hosting models privately by running a custom inference node. In this setup, the model is cached locally on that node only and is never stored in our decentralized filestore.
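A minimal sketch of what targeting a self-hosted node could look like, assuming the SDK accepts a custom node endpoint; the `rpc_url` parameter and the `local://` model reference are hypothetical placeholders, not confirmed SDK features.

```python
import opengradient as og

# Hypothetical: point the SDK at your own inference node instead of the
# public network, so the model never touches the shared filestore.
og.init(
    private_key="<your-private-key>",
    rpc_url="https://my-private-node.example.com",  # assumed parameter name
)

# The model is referenced locally on the private node; it is cached
# there only and never uploaded to the decentralized filestore.
tx_hash, output = og.infer(
    model_cid="local://my-private-model",  # hypothetical private reference
    model_input={"input": [1.0, 2.0, 3.0]},
)
```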
