Quick Start Guide

Starter Agent

Below are starter agents for two agent frameworks (LangGraph and OpenAI's Swarm) that showcase the OpenGradient agent stack.

TIP

You can find our agent starter kit repo on GitHub.

Using LangGraph:

python
from langgraph.prebuilt import create_react_agent
import opengradient as og

# Initialize OpenGradient client (OG_PRIVATE_KEY holds your OpenGradient private key)
og.init(private_key=OG_PRIVATE_KEY)

# Create OpenGradient LLM
opengradient_llm = og.langchain_adapter(
    private_key=OG_PRIVATE_KEY,
    execution_mode=og.LlmExecutionMode.TEE,
    model_cid='meta-llama/Llama-3.1-70B-Instruct')

# Initialize tools
tools = get_agent_tools()

# Create agent
opengradient_agent = create_react_agent(opengradient_llm, tools)

# Execute agent
events = opengradient_agent.stream(
    {"messages": [("user", "hello")]},
    stream_mode="values"
)
Using OpenAI Swarm:

python
from swarm import Agent, Swarm
import opengradient as og

# Initialize OpenGradient client
og.init(private_key=OG_PRIVATE_KEY)

# Create OpenAI compatible client
opengradient_client = og.openai_adapter(private_key=OG_PRIVATE_KEY)

# Create Swarm client
swarm_client = Swarm(client=opengradient_client)

# Create agent
opengradient_agent = Agent(
    name="Trader agent",
    instructions="your job is to...",
    functions=get_agent_tools(),
    model='meta-llama/Llama-3.1-70B-Instruct',
)

# Execute agent
response = swarm_client.run(
    agent=opengradient_agent,
    messages=[{
        "role": "user",
        "content": "hello",
    }],
)

To start building your own agent right away, check out our agent starter kit on GitHub.
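Both snippets above call a `get_agent_tools()` helper that is not defined here; its contents depend entirely on your agent. A minimal, hypothetical sketch (the price-lookup tool below is an illustration, not part of the OpenGradient SDK) might look like:

```python
def get_token_price(token: str) -> str:
    """Return the latest price for a token symbol (stub for illustration)."""
    # A real agent would query a CEX/DEX API here instead of a static dict.
    prices = {"SUI": 1.23, "ETH": 3400.0}
    return f"{token} is trading at ${prices.get(token.upper(), 0.0)}"

def get_agent_tools():
    """Collect the plain-Python functions the agent may call as tools."""
    return [get_token_price]
```

Both frameworks accept plain callables like these: LangGraph's `create_react_agent` takes them in its tools list, and Swarm takes them via `functions`.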

Choosing different LLM models

In the starter agent example, we used meta-llama/Llama-3.1-70B-Instruct for agentic reasoning. You can also pick any of the LLMs supported by the OpenGradient network; the full list can be found here. To switch to a different LLM, simply replace the model_cid.

For example, to use Mistral's 7B Instruct model with LangChain:

python
import opengradient as og

opengradient_llm = og.langchain_adapter(
    private_key=OG_PRIVATE_KEY,
    execution_mode=og.LlmExecutionMode.TEE,
    model_cid="mistralai/Mistral-7B-Instruct-v0.3")

Refer to our API docs for further information.

Adding AlphaSense ML tools

AlphaSense models and tools are exposed as regular agent tools (also known as functions) that can be plugged into any existing agent. Any custom model from our Model Hub can be utilized by agents through AlphaSense.

A tool has the following properties:

  • name
  • description
  • input_description

The LLM uses these to decide when and how to utilize the given tool. AlphaSense allows you to expose custom ML models as tools.
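Conceptually, a tool is just these three fields plus a callable. A hypothetical sketch of that shape (the names here are illustrative, not the OpenGradient API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolSpec:
    name: str                 # short identifier the LLM refers to
    description: str          # tells the LLM when the tool is useful
    input_description: str    # tells the LLM what input the tool expects
    run: Callable[..., str]   # the function the framework actually invokes

forecast_tool = ToolSpec(
    name="SuiSpotForecast",
    description="Forecasts the SUI price 30 minutes ahead.",
    input_description="Requires no input.",
    run=lambda: "predicted change: +0.4%",
)

# The LLM only sees name/description/input_description in its prompt;
# when it decides the tool is relevant, the framework calls `run`.
```

AlphaSense's `create_og_model_tool` (shown below) builds this kind of object for you, with the model invocation wired into the callable.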

For example, creating a tool for a custom spot forecasting model from the Model Hub is as simple as:

python
import opengradient as og

spot_forecasting_model_id = 'QmY1RjD3s4XPbSeKi5TqMwbxegumenZ49t2q7TrK7Xdga4'

# Create spot forecasting tool for SUI/USD
spot_forecast_tool = og.mltools.create_og_model_tool(
    tool_type=og.mltools.ToolType.LANGCHAIN,
    model_cid=spot_forecasting_model_id,
    tool_name="SuiSpotForecast",
    input_getter=lambda: {"open_high_low_close": fetch_sui_price_history()},
    output_formatter=lambda out: f"The predicted price change is: {out['destandardized_prediction'][0] * 100}%",
    tool_description="Runs an ML model to forecast the price of SUI 30 minutes from now. Requires no input."
)

# agent can use spot forecasting tool as part of its execution
agent = create_react_agent(opengradient_llm, [spot_forecast_tool])

The model input (in this case, price history) can be fetched from any CEX/DEX API via the fetch_sui_price_history function; implementing this input_getter is the developer's responsibility.
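A hypothetical fetch_sui_price_history and the matching input_getter/output_formatter pair might look like the sketch below. The candle values are made up for illustration, and the output key `destandardized_prediction` mirrors the example above; a real implementation would call an exchange API and match your model's actual output schema.

```python
def fetch_sui_price_history():
    """Stub: return recent OHLC candles. A real version would call a CEX/DEX API."""
    return [
        [1.20, 1.25, 1.19, 1.24],  # open, high, low, close
        [1.24, 1.27, 1.23, 1.26],
    ]

# Same shape as the input_getter passed to create_og_model_tool above:
# a zero-argument callable returning the model's input dict.
input_getter = lambda: {"open_high_low_close": fetch_sui_price_history()}

# Same shape as the output_formatter: turns raw model output into text the LLM can read.
output_formatter = lambda out: (
    f"The predicted price change is: {out['destandardized_prediction'][0] * 100}%"
)

print(output_formatter({"destandardized_prediction": [0.004]}))
```

Keeping the fetch logic inside input_getter means the tool pulls fresh data every time the agent invokes it, rather than at tool-creation time.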

You can replace the model CID with any model from the OpenGradient Model Hub, and you can create multiple tools backed by different models.

TIP

You can also upload your own model to the Hub and make it accessible to agents.

Setting up Data Access

Currently in Private Beta. Please contact us if you are interested.

OpenGradient 2025