# Model Hub

> **TIP**
> Explore the OpenGradient Model Hub at hub.opengradient.ai
The OpenGradient Model Hub is a decentralized repository for AI models — a place to discover, share, version, and run models of every kind. From lightweight regression models to full-scale LLMs, diffusion models, and everything in between, the Hub gives every model a permanent home backed by Walrus decentralized storage.
Unlike centralized model registries, the Model Hub is permissionless: anyone can upload a model and make it available for inference on the OpenGradient network in seconds, with no gatekeepers and no approval queues.
## Why the Model Hub?
Traditional model hosting is fragmented. Models are scattered across cloud buckets, GitHub repos, and proprietary platforms — each with its own access patterns, versioning schemes, and limitations. The Model Hub provides a single, decentralized alternative that combines:
- Permanent, decentralized storage — Models are stored on Walrus, so they can't be taken down, censored, or lost when a cloud provider changes its terms.
- Built-in versioning — Every model is organized into repositories with structured releases, so you can iterate without breaking consumers.
- Inference-ready by default — Models in ONNX format are immediately available for on-chain inference across the OpenGradient network, including Vanilla, ZKML, and LLM execution modes.
- Community and collaboration — Each model has its own discussion forum, tags, and social features to help the community learn and iterate together.
- Web3-native access — Sign in with your wallet or a traditional email account. Models are identified by content-addressed Blob IDs, making them composable with smart contracts and on-chain workflows.
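Content addressing means a model's identifier is derived from its bytes, so the same file always resolves to the same ID and a smart contract can reference it unambiguously. A minimal sketch of the idea using a SHA-256 digest (purely illustrative; Walrus's actual Blob ID scheme differs in detail):

```python
import hashlib

def content_address(data: bytes) -> str:
    # A content address is a digest of the bytes themselves:
    # identical files always map to the identical ID.
    return hashlib.sha256(data).hexdigest()

model_bytes = b"\x08\x01 example serialized model"  # stand-in for a model file
blob_id = content_address(model_bytes)

# Deterministic: re-hashing the same bytes yields the same ID,
# which is what makes the reference safe to embed on-chain.
assert blob_id == content_address(model_bytes)
print(blob_id[:16])
```

Because the ID changes whenever the bytes change, consumers can verify they received exactly the artifact they referenced.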
## What You Can Find
The Hub hosts models across a broad range of tasks and architectures:
| Category | Examples |
|---|---|
| LLMs | Chat models, completion models, instruction-tuned variants |
| DeFi & Finance | Volatility forecasters, dynamic fee models, risk scoring |
| Computer Vision | Image classifiers, object detection, stable diffusion |
| Time Series | Price prediction, anomaly detection, forecasting |
| General ML | Regression, classification, clustering, embeddings |
There are no restrictions on what you can upload — the Hub supports any model format for storage. For on-chain inference, models need to be in ONNX format.
## Getting Started
There are two ways to interact with the Model Hub:
### Web Frontend
The Hub website is the fastest way to browse, search, and test models. It provides:
- Full-text search with filters for tasks, tags, authors, and organizations
- A built-in Playground for running inference directly in your browser
- Model pages with descriptions, version history, files, and community discussions
- One-click uploads for new models and versions
The web UI is ideal for exploring what's available, trying out models, and managing your repositories visually.
### Python SDK & CLI

The Python SDK and bundled CLI are designed for programmatic workflows — CI/CD pipelines, batch uploads, scripted version management, and integration into training pipelines. Install with:

```shell
pip install opengradient
```

Then create, version, and upload models from your terminal or scripts:
```python
import opengradient as og

client = og.Client(private_key="<private_key>", email="<email>", password="<password>")

# Create a new model repository
client.model_hub.create_model("my-model", model_desc="A volatility forecasting model")

# Upload a model file
client.model_hub.upload(model_path="model.onnx", model_name="my-model", version="1.00")
```

See the SDK Model Management guide for full CLI and library documentation.
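For scripted version management, a small helper can compute the next release string before calling `upload`. The two-decimal `major.minor` scheme below is inferred from the version strings this page uses (`1.00`, `1.01`, `2.00`); adjust it if your repository follows a different convention:

```python
def next_version(version: str, bump: str = "minor") -> str:
    """Bump a Hub-style 'major.minor' release string, e.g. '1.00' -> '1.01'."""
    major, minor = version.split(".")
    if bump == "major":
        return f"{int(major) + 1}.00"
    return f"{major}.{int(minor) + 1:02d}"

# In a CI pipeline this pairs with the SDK calls shown above, e.g.:
#   new_version = next_version(latest_version)
#   client.model_hub.upload(model_path="model.onnx",
#                           model_name="my-model", version=new_version)
print(next_version("1.00"))           # -> 1.01
print(next_version("1.09", "major"))  # -> 2.00
```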
## How Models Are Organized
The Hub uses a three-level hierarchy to keep models structured and iterable:
```
Model Repository
├── Release v1.00
│   ├── model.onnx
│   ├── README.md
│   └── config.json
├── Release v1.01
│   ├── model.onnx (updated)
│   └── README.md
└── Release v2.00
    ├── model_part1.onnx
    ├── model_part2.onnx
    └── README.md
```

- Model Repository — The top-level container. Holds metadata like the model's name, description, task category, license, and tags.
- Model Release — A specific version of the model (e.g., 1.00, 1.01, 2.00). Each release is independently usable and can contain release notes describing what changed.
- Model Files — The actual artifacts within a release. In simple cases this is a single ONNX file; for larger architectures like LLMs, a release can contain multiple files along with documentation and configs.
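The three-level hierarchy can be pictured as a simple data structure. This is an illustration of how repositories, releases, and files nest, not the Hub's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRelease:
    version: str      # e.g. "1.00"
    files: list       # artifact names within the release
    notes: str = ""   # optional release notes

@dataclass
class ModelRepository:
    name: str
    description: str = ""
    tags: list = field(default_factory=list)
    releases: list = field(default_factory=list)

    def latest(self):
        # Simple lexicographic pick; adequate while the major version
        # stays single-digit under the "major.minor" scheme.
        return max(self.releases, key=lambda r: r.version)

repo = ModelRepository(
    name="my-model",
    description="A volatility forecasting model",
    releases=[
        ModelRelease("1.00", ["model.onnx", "README.md", "config.json"]),
        ModelRelease("2.00", ["model_part1.onnx", "model_part2.onnx", "README.md"]),
    ],
)
print(repo.latest().version)  # -> 2.00
```

Each release stays independently usable: a consumer pinned to `1.00` is unaffected when `2.00` is published.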
Learn more in Model Management.
## Running Inference
Every ONNX model on the Hub is ready for inference on the OpenGradient network. There are multiple execution modes:
- Vanilla Inference — Standard model execution, fastest option
- ZKML Inference — Zero-knowledge proof-based execution for privacy-preserving, verifiable results
- LLM Inference — Chat and completion endpoints for large language models
You can test any model using the Playground on the Hub, or run inference programmatically through the Python SDK.
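Conceptually, the execution modes trade speed for verifiability while running the same model. The toy dispatcher below is purely illustrative of that mode-selection idea; the real SDK exposes mode selection through its own API, documented in the SDK guides:

```python
from enum import Enum

class InferenceMode(Enum):
    VANILLA = "vanilla"  # standard execution, fastest option
    ZKML = "zkml"        # zero-knowledge, verifiable execution
    LLM = "llm"          # chat/completion endpoints for language models

def run_inference(model_id: str, inputs: dict, mode: InferenceMode) -> dict:
    # Toy stand-in: a real call would submit the request to the
    # OpenGradient network and return the model's outputs.
    return {"model": model_id, "mode": mode.value, "inputs": inputs}

result = run_inference("my-model", {"x": [1.0, 2.0]}, InferenceMode.ZKML)
print(result["mode"])  # -> zkml
```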
> **NOTE**
> To learn more about the decentralized storage layer that powers the Hub, see Storage.
