Core Features

Browsing & Discovery

The Hub's model explorer is designed for fast, flexible discovery across the entire model catalog.

Search — Find models by name, author, or content identifier using the search bar. Advanced search qualifiers let you narrow results further:

  • name:volatility — search within model names
  • author:OpenGradient — filter by publisher
  • cid:Qm... — look up a model by its content identifier
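The qualifier syntax above can be modeled as a simple token parser. This is an illustrative sketch only — the Hub's actual search implementation is not public, and the parsing rules here are assumptions:

```python
import re

def parse_query(query: str) -> dict:
    """Split a Hub-style search query into qualifiers and free text.

    Illustrative sketch; recognizes the three documented qualifiers
    (name:, author:, cid:) and treats everything else as free text.
    """
    qualifiers = {}
    free_text = []
    for token in query.split():
        match = re.fullmatch(r"(name|author|cid):(\S+)", token)
        if match:
            qualifiers[match.group(1)] = match.group(2)
        else:
            free_text.append(token)
    return {"qualifiers": qualifiers, "text": " ".join(free_text)}
```

For example, `parse_query("author:OpenGradient name:volatility eth")` separates the two qualifiers from the free-text term `eth`.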

Filter by task — Narrow results to a specific problem domain: LLMs, risk models, DeFi forecasting, image generation, protocol optimization, multimodal models, and more.

Filter by tags — Community and author-applied tags make it easy to find models for specific use cases, frameworks, or datasets.

Filter by organization — Browse models published by a specific team or organization.

Sort — Order results by relevance, recency, popularity (likes), or download count to surface the models that matter most.

Model Pages

Every model has a dedicated page that serves as its home on the Hub. A model page includes:

  • About — A rich description with full Markdown support (including LaTeX math and GFM tables). Authors can document their model's purpose, architecture, training details, and usage instructions.
  • Files & Versions — Browse every release and its files. Download individual files, view file metadata, and copy Blob IDs for use in inference calls.
  • Playground — Run the model interactively (see below).
  • Metadata — Task category, license, tags, download stats, and like count.

Model authors also get access to a settings panel where they can update descriptions, manage tags, control visibility (public or private), rename the model, or delete it.

Playground

The Playground is an interactive inference sandbox built into every model page. It lets you run a model directly in your browser — no SDK setup, no code required.

For ONNX models, the Playground automatically reads the model's input and output schema and generates a form with the correct field names and types pre-populated. Model authors can customize the default input values and add descriptions to make it easier for visitors to test the model with realistic data.

For LLMs, the Playground provides a chat interface where you can send messages and see responses in real time.

The Playground calls inference on the OpenGradient network, so results are identical to what you'd get through the SDK or a smart contract — including the blockchain transaction hash that records the execution.

TIP

Try the Playground on any model — for example: OpenGradient Volatility Model Playground

Versioning

Models evolve. The Hub's versioning system is designed to make iteration seamless without breaking existing consumers.

Each model repository supports semantic versioning with major and minor releases:

  • Minor versions (e.g., 1.00 → 1.01) — incremental improvements, retraining runs, or small fixes
  • Major versions (e.g., 1.01 → 2.00) — architectural changes, breaking input/output changes, or significant upgrades
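The two-level bump logic can be sketched in a few lines. This is illustrative only — the exact zero-padding and formatting rules of the Hub's version scheme are assumptions based on the examples above:

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR version string like '1.01'.

    Illustrative sketch of a two-level scheme: a minor bump increments
    the right-hand component; a major bump increments the left-hand
    component and resets the minor to 00.
    """
    major, minor = version.split(".")
    if part == "major":
        return f"{int(major) + 1}.00"
    if part == "minor":
        return f"{major}.{int(minor) + 1:02d}"
    raise ValueError(f"unknown part: {part}")
```

So `bump("1.00", "minor")` yields `"1.01"`, and `bump("1.01", "major")` yields `"2.00"`, matching the examples above.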

Every version includes:

  • Its own set of files (model artifacts, documentation, configs)
  • Optional release notes describing what changed
  • Independent Blob IDs, so consumers referencing a specific version are never affected by newer uploads

You can create and manage versions through the web UI or the Python SDK.

Organizations

Organizations let teams publish and manage models under a shared identity. An organization has its own profile page, member list, and model catalog — making it easy for teams, companies, or research groups to establish a recognizable presence on the Hub.

Models published under an organization appear with the organization name as the author, and visitors can browse all of an organization's models from a single page.

For Developers

Using Models in Your Code

Every model on the Hub has a Blob ID — a content-addressed identifier that points to the model's files on decentralized storage. To use a model for inference, you reference it by this Blob ID.
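The key property of a content-addressed identifier is that it is derived from the bytes themselves, so a reference can never silently point at changed content. The sketch below illustrates the general idea with a SHA-256 digest — real Blob IDs are produced by the Hub's storage layer, and their actual scheme is not shown here:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive an identifier from file bytes (illustration only;
    not the Hub's actual Blob ID scheme)."""
    return hashlib.sha256(data).hexdigest()

# Identical bytes always map to the same ID; changed bytes never do.
a = content_id(b"model-weights-v1")
b = content_id(b"model-weights-v1")
c = content_id(b"model-weights-v2")
assert a == b and a != c
```

This is why referencing a specific version's Blob ID insulates consumers from later uploads: a new artifact necessarily gets a new identifier.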

The model's "About" page shows ready-to-use code snippets for both the CLI and the Python SDK. A typical workflow looks like:

```python
import opengradient as og

client = og.Client(private_key="<key>", email="<email>", password="<password>")

result = client.inference.infer(
    model_cid="<blob_id>",
    model_input={"input1": [1.0, 2.0, 3.0]},
    inference_mode=og.InferenceMode.VANILLA
)

print(result.model_output)
print(result.transaction_hash)  # on-chain record of this execution
```

Markdown Support

Model descriptions support most GitHub Flavored Markdown syntax, LaTeX for mathematical notation, and captions for images, tables, and blockquotes.

Captions

Prefix the caption line with Caption: immediately after the element:

```markdown
![Architecture diagram](https://example.com/diagram.png)
Caption: Overview of the model architecture
```

```markdown
| Metric | Value |
| ------ | ----- |
| Accuracy | 0.94 |
| F1 Score | 0.91 |

Caption: Benchmark results on the test set
```

Note: for tables, leave a blank line between the table and the Caption: line.
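The `Caption:` convention is easy to process mechanically. The sketch below shows one way a renderer might collect caption lines from a description — it is illustrative only and not the Hub's actual renderer:

```python
def extract_captions(markdown: str) -> list[str]:
    """Collect 'Caption:' lines from a model description.

    Illustrative sketch; note that a table's caption is separated
    from the table by one blank line, which this line-based scan
    handles the same way as an adjacent caption.
    """
    return [
        line.removeprefix("Caption:").strip()
        for line in markdown.splitlines()
        if line.startswith("Caption:")
    ]
```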