Model Scheduler

The Model Scheduler enables you to schedule and automate recurring model inference tasks on the OpenGradient blockchain. Instead of manually triggering model runs, you can register your contract once and have it execute automatically on a schedule you define.

Why Use Model Scheduler?

Model Scheduler is built for periodic, automated model inference. Common use cases include:

  • Dynamic AMM Fee Models: Automatically update fee parameters based on market volatility predictions
  • Risk Assessment: Periodically recalculate lending pool risk scores using ML models
  • Price Prediction Services: Run price prediction models on a schedule and expose results via your contract
  • On-Chain Agents: Execute autonomous agents that make decisions based on model outputs
  • Market Analysis: Automatically analyze market conditions and update trading strategies

How It Works

From a developer's perspective, Model Scheduler is simple:

  1. Deploy your contract that implements the IModelExecutor interface
  2. Register your contract with the scheduler, specifying how often it should run
  3. Your contract's run() function is automatically called on the schedule you defined
  4. Retrieve results anytime by calling getInferenceResult() on your contract

The system handles all the complexity of scheduling, transaction execution, and reliability—you just focus on your model logic.

Implementing Your Contract

To use Model Scheduler, your contract must implement the IModelExecutor interface. This interface requires two functions:

Required Functions

  1. run(): This function is called automatically on your defined schedule. Implement your model inference logic here.
  2. getInferenceResult(): Returns the latest ModelOutput result from your model execution.
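For reference, the interface itself is small. The sketch below is inferred from the example contract later on this page; the authoritative definition is the IModelExecutor.sol file you import, and ModelOutput is assumed to come from the OpenGradient inference contracts.

solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

// Assumed to define ModelOutput, matching the imports in the example contract below.
import "x/evm/contracts/og_inference/OGInference.sol";

/// @dev Sketch of the IModelExecutor interface; the shipped IModelExecutor.sol is authoritative.
interface IModelExecutor {
    /// @dev Called automatically by the Model Scheduler on your schedule.
    function run() external;

    /// @dev Returns the most recent result stored by run().
    function getInferenceResult() external view returns (ModelOutput memory);
}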

Implementation Steps

  1. Import the interface: Add import "./IModelExecutor.sol"; to your contract
  2. Implement the interface: Make your contract implement IModelExecutor
  3. Define your run() function: Add your model inference logic (using OGInference, OGHistorical, etc.)
  4. Store results: Save the ModelOutput in a state variable
  5. Implement getInferenceResult(): Return the stored result
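Put together, a contract that follows these steps looks roughly like the skeleton below. This is only a sketch (the contract name is a placeholder); the complete example in the next section fills in the actual inference logic.

solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

import "./IModelExecutor.sol";                           // Step 1: import the interface
import "x/evm/contracts/og_inference/OGInference.sol";   // assumed source of ModelOutput

contract MyModelExecutor is IModelExecutor {             // Step 2: implement the interface
    ModelOutput private inferenceResult;                 // Step 4: state variable for the result

    function run() public override {
        // Step 3: your model inference logic (OGInference, OGHistorical, etc.)
        // ... assign the resulting ModelOutput to inferenceResult
    }

    function getInferenceResult() public view override returns (ModelOutput memory) {
        return inferenceResult;                           // Step 5: return the stored result
    }
}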

Example: Volatility Prediction Contract

Here's a complete example of a contract that predicts ETH/USD volatility and can be scheduled to run automatically:

solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

import "./IModelExecutor.sol";
import "x/evm/contracts/og_inference/OGInference.sol";
import "x/evm/contracts/historical/OGHistorical.sol";

/**
 * @title ModelExecutorVolatility
 * @dev Implementation of IModelExecutor to predict ETH/USD volatility using OpenGradient's model.
 */
contract ModelExecutorVolatility is IModelExecutor {
    event InferenceResultEmitted(address indexed caller, ModelOutput result);

    /// @dev Stores the result of the last model inference
    ModelOutput private inferenceResult;
    OGHistorical public historicalContract;

    constructor() {
        address historicalContractAddress = 0x00000000000000000000000000000000000000F5;
        historicalContract = OGHistorical(historicalContractAddress);
    }

    /**
     * @dev Executes the volatility model task.
     */
    function run() public override {
        CandleType[] memory candles = new CandleType[](4);
        candles[0] = CandleType.Open;
        candles[1] = CandleType.High;
        candles[2] = CandleType.Close;
        candles[3] = CandleType.Low;

        HistoricalInputQuery memory input_query = HistoricalInputQuery({
            currency_pair: "ETH/USD",
            total_candles: 10,
            candle_duration_in_mins: 30,
            order: CandleOrder.Ascending,
            candle_types: candles
        });

        inferenceResult = historicalContract.runInferenceOnPriceFeed(
            "QmRhcpDXfYCKsimTmJYrAVM4Bbvck59Zb2onj3MHv9Kw5N",
            "open_high_low_close",
            input_query
        );

        emit InferenceResultEmitted(msg.sender, inferenceResult);
    }

    /**
     * @dev Retrieves the result of the last executed task.
     * @return The stored `ModelOutput`.
     */
    function getInferenceResult() public view override returns (ModelOutput memory) {
        return inferenceResult;
    }
}
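Other contracts can read the latest prediction straight from the executor. The consumer below is a hypothetical sketch (VolatilityConsumer and latestPrediction are illustrative names); it assumes only the IModelExecutor interface shown above.

solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

import "./IModelExecutor.sol";
import "x/evm/contracts/og_inference/OGInference.sol"; // assumed source of ModelOutput

/// @dev Hypothetical consumer that reads the scheduled executor's latest result.
contract VolatilityConsumer {
    IModelExecutor public executor;

    constructor(address executorAddress) {
        executor = IModelExecutor(executorAddress);
    }

    /// @dev Fetches the most recent ModelOutput produced by the scheduled run().
    function latestPrediction() external view returns (ModelOutput memory) {
        return executor.getInferenceResult();
    }
}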

Developer Workflow

The end-to-end workflow mirrors the steps above: implement the IModelExecutor interface, deploy your contract, register it with the Model Scheduler at your chosen frequency, and read the latest result via getInferenceResult().

Use Cases

Dynamic Fee AMM

Create an AMM that adjusts fees based on predicted market volatility:

solidity
function run() public override {
    // Get historical price data
    HistoricalInputQuery memory query = ...;
    
    // Run volatility model
    ModelOutput memory output = historicalContract.runInferenceOnPriceFeed(
        volatilityModelCID,
        "volatility_prediction",
        query
    );
    
    // Update AMM fee based on prediction
    uint256 newFee = extractFeeFromOutput(output);
    amm.setFee(newFee);
    
    inferenceResult = output;
}

Automated Risk Scoring

Periodically update risk scores for a lending pool:

solidity
function run() public override {
    // Collect borrower data
    BorrowerData[] memory borrowers = getActiveBorrowers();
    
    // Run risk assessment model
    ModelOutput memory riskScores = OGInference.runModelInference(
        ModelInferenceRequest(
            ModelInferenceMode.ZK,
            riskModelCID,
            encodeBorrowerData(borrowers)
        )
    );
    
    // Update risk scores in pool
    updateBorrowerRiskScores(riskScores);
    
    inferenceResult = riskScores;
}

Scheduling Your Contract

Once your contract is deployed, register it with the Model Scheduler to enable automated execution. You can specify:

  • Execution frequency: How often run() should be called (e.g., every 5 minutes, hourly, daily)
  • Start time: When the scheduled execution should begin
  • Stop time: When the scheduled execution should end (optional)

The Model Scheduler handles all transaction execution automatically—you don't need to manage wallets, nonces, or retries.

TIP

You can also use the Python SDK to register and manage scheduled tasks programmatically, making it easy to integrate into your deployment workflows.