Secure LLM Inference and Reasoning
Overview
OpenGradient provides verifiable LLM inference through the Python and TypeScript SDKs. These SDKs interact directly with the OpenGradient blockchain to run secure, verified LLM inferences. LLM inferences on OpenGradient are secured through TEEs and cryptoeconomic security. Running inference on-chain provides full transparency, traceability, and security for agents' execution, allowing validators and users to track every prompt, input, and piece of context that goes into an agent's reasoning process, as well as its generated output and actions.
NOTE
You can learn more about how OpenGradient's verifiable inference works and the architecture that enables it here.
This allows everyone to gain unparalleled insight into agents and to track and verify the correctness of their behavior. By using our verifiable LLMs, you also get instant integration with our Agent Explorer, giving you and your users full visibility and traceability.
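As a minimal sketch of what a single verifiable inference looks like at the SDK level, the snippet below uses the same openai_adapter shown in the Usage section further down. The adapter is OpenAI-compatible, so the call shape mirrors the standard OpenAI client; treat the exact response fields as illustrative rather than definitive.

import opengradient as og

# Initialize the SDK with the wallet key that signs and persists inferences on-chain
og.init(private_key=OG_PRIVATE_KEY)

# The OpenAI-compatible adapter routes standard chat-completion calls
# through OpenGradient's verifiable inference
client = og.openai_adapter(private_key=OG_PRIVATE_KEY)

# A standard chat completion; the prompt, context, and response are all
# recorded on the OpenGradient blockchain (response shape assumed OpenAI-compatible)
completion = client.chat.completions.create(
    model='meta-llama/Llama-3.1-70B-Instruct',
    messages=[{"role": "user", "content": "hello"}],
)
print(completion.choices[0].message.content)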
At a high level, every agent execution looks like the following:
def run_agent(agent, llm, tools, user_input):
    msgs = [user_input]
    while True:
        llm_prompt = format_prompt(agent.system_prompt, tools, msgs)
        next_step = llm.execute(llm_prompt)
        if next_step.isDone():
            # Agent is done, return final answer
            return next_step.answer
        else:
            # Otherwise, execute the selected action
            tool = tools[next_step.tool_name]
            tool_output = tool.execute(next_step.tool_input)
            msgs.append(tool_output)
As we can see, everything the agent does is driven by the underlying LLM's reasoning output, which is a product of the agent's system_prompt, tools, and the user_input provided for the execution. The LLM takes all of these variables and returns the appropriate action to take (whether that is transferring tokens or posting a tweet), which the agent framework then executes.
By having verifiable LLM inference through OpenGradient persisted on our immutable blockchain ledger, the agent's entire reasoning and decision-making becomes fully transparent and traceable: anyone can see what inputs and instructions the agent received and whether it acted honestly. This makes agents verifiable and trustless, opening up the possibility of much more advanced use-cases and responsibilities.
Benefits
- The agent's input, reasoning, tool usage, and output become fully traceable and verifiable on our network. This eliminates the need to trust centralized agent and LLM operators and opens the door for high-value use-cases (e.g. DeFi or trading).
- Every agent execution leaves an immutable trace on the OpenGradient blockchain that anyone can easily verify and trace
- No need to trust centralized agent owners and operators
- Every LLM inference is secured and verified by the OpenGradient network validators, using TEE and cryptoeconomic security
- Permissionless access to best-in-class LLMs with tool calling and reasoning capabilities (e.g. Meta-Llama-3-70B and Qwen2-72B)
- Out-of-the-box integration with our Agent Explorer, providing unparalleled visibility and traceability
- Compatible with the most popular agent frameworks and requires no modifications or rewrites
- Option to run inferences on H100 GPUs with Trusted Execution Environment (TEE)
Usage
OpenGradient's verifiable LLMs can be used as a drop-in replacement for centralized LLM providers such as OpenAI or Anthropic. We provide direct integrations for popular agent frameworks such as LangChain: simply plug our LLMs into these frameworks and get verifiability instantly. For example, here is a LangGraph ReAct agent backed by an OpenGradient LLM:
from langgraph.prebuilt import create_react_agent
import opengradient as og

og.init(private_key=OG_PRIVATE_KEY)

# Create a LangChain-compatible LLM backed by OpenGradient's verifiable inference
opengradient_llm = og.langchain_adapter(
    private_key=OG_PRIVATE_KEY,
    execution_mode=og.LlmExecutionMode.TEE,
    model_cid='meta-llama/Llama-3.1-70B-Instruct')

# Build a ReAct agent using the verifiable LLM and your tools
tools = get_agent_tools()
opengradient_agent = create_react_agent(opengradient_llm, tools)

# Stream the agent's execution
events = opengradient_agent.stream(
    {"messages": [("user", "hello")]},
    stream_mode="values"
)
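The stream call returns an iterator of intermediate agent states. Consuming it follows standard LangGraph usage (nothing OpenGradient-specific), for example:

# Each event is the full agent state; print the most recent message at each step
for event in events:
    event["messages"][-1].pretty_print()

If your framework expects an OpenAI-compatible client instead, such as OpenAI's Swarm, you can use the openai_adapter: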
from swarm import Agent, Swarm
import opengradient as og

og.init(private_key=OG_PRIVATE_KEY)

# Create OpenAI compatible client
opengradient_client = og.openai_adapter(private_key=OG_PRIVATE_KEY)

# Create Swarm client
swarm_client = Swarm(client=opengradient_client)

# Create agent
opengradient_agent = Agent(
    name="Trader agent",
    instructions="your job is to...",
    functions=get_agent_tools(),
    model='meta-llama/Llama-3.1-70B-Instruct',
)

# Execute agent
response = swarm_client.run(
    agent=opengradient_agent,
    messages=[{
        "role": "user",
        "content": "hello",
    }],
)
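Swarm's run call returns a response object containing the accumulated conversation. Assuming the standard Swarm response shape, you can read the agent's final reply like this:

# Print the final message produced by the agent (standard Swarm response shape assumed)
print(response.messages[-1]["content"])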
TIP
Refer to our Quick Start guide for more info.
Below is an example of an agent reasoning trace on our block explorer:
View trace on OpenGradient explorer
Supported Models
Visit the Model Hub to see available LLMs.