
OpenGradient Use Cases

OpenGradient unlocks a world of new possibilities for trustless applications, allowing developers to build AI agents, smarter applications, and optimized features that improve user experience. With SolidML for on-chain applications and the Python SDK for off-chain applications, developers can leverage verified, decentralized AI inference across a wide range of use cases.

We published an extensive blog post on the applied-AI verticals in Web3 that the team is focusing on. For the latest updates on OpenGradient's R&D efforts, we recommend reading the blog section on our website.

On-Chain Use Cases (SolidML)

Build AI-enabled smart contracts that execute model inference atomically within transactions:

DeFi & Trading

AI Agents & Automation

  • Autonomous AI Agents: LLM-backed agents that make decisions and interact with other contracts, with cryptographic proof of which prompts were used to take specific actions, enabling transparent and auditable agent behavior
  • Scheduled Model Execution: Use the Model Scheduler to run periodic inference tasks automatically (e.g., hourly volatility predictions, daily risk assessments)

Reputation & Identity

  • AI-Driven Reputation Scoring for DePIN: Use LLMs to assess and score reputation in decentralized physical infrastructure networks
  • Sybil Resistance: Leverage ML models for identity verification and Sybil attack prevention

Infrastructure

  • Next-Gen Crypto Wallets with AI-Enhanced User Experience: Wallets that use AI to provide better UX, security recommendations, and transaction insights

Off-Chain Use Cases

Build applications that leverage OpenGradient's verified inference infrastructure without deploying smart contracts. LLM inference uses TEE (Trusted Execution Environment) verification, providing cryptographic guarantees and hardware attestation.
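
As a rough illustration of that flow, the sketch below shows what a TEE-verified LLM call might look like from the Python SDK's side. The client, method, and field names here are placeholders for illustration, not the SDK's actual API; consult the SDK reference for the real interface.

```python
# Illustrative sketch only: the client and method names below are placeholders,
# not the documented SDK API -- check the OpenGradient SDK reference.
from dataclasses import dataclass


@dataclass
class VerifiedCompletion:
    text: str         # the model's answer
    attestation: str  # TEE attestation returned alongside the result


class OpenGradientClient:
    """Stand-in for an SDK client that routes LLM calls to TEE-verified inference."""

    def __init__(self, private_key: str) -> None:
        self.private_key = private_key

    def llm_chat(self, model: str, messages: list[dict]) -> VerifiedCompletion:
        ...  # placeholder: the real SDK performs the verified call here


def ask_verified(client: OpenGradientClient, question: str) -> VerifiedCompletion:
    # The prompt runs inside a TEE; the attestation lets the caller (or an
    # auditor) verify which code and prompt produced the answer.
    return client.llm_chat(
        model="hosted-llm-id",  # placeholder model identifier
        messages=[{"role": "user", "content": question}],
    )
```

The key point is that the response carries an attestation that can be stored or checked independently of the answer itself.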

Enterprise & Web2 Applications

  • Decentralized and Privacy-Preserving LLMs: Run LLM inference with full verification and transparency, ensuring model integrity without exposing proprietary data
  • Blockchain-Secured Deployment of Proprietary AI Models: Deploy proprietary models with cryptographic guarantees of execution integrity
  • AI-Based Fraud Detection for Fintech Applications: Use verified ML models for fraud detection with full auditability and transparency
  • Secure Compute for Healthcare AI: Healthcare applications that require verified, tamper-proof AI inference for critical decision-making with privacy protection and code verification through TEE attestation
  • Transparent and Decentralized AI Content Moderation: Content moderation systems with verifiable AI decisions and transparent model execution, providing cryptographic proof of moderation decisions

Mission-Critical Applications

  • DeFi Smart Contract Analysis: Analyze smart contracts with verified prompts for audit compliance, ensuring cryptographic proof of what prompts were sent to the LLM
  • Resolution and Decision Verification: Verify that resolutions or decisions used the correct prompt with accurate data inputs, ensuring fair and transparent outcomes with on-chain proof of the logic used
  • Financial Services AI: Build financial applications that require verifiable LLM inference with full audit trails for regulatory compliance
  • Regulatory Compliance Tools: Create AI systems that provide cryptographic proof of LLM interactions for audit and compliance requirements (see the sketch after this list)
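
A recurring pattern behind these items is pinning each LLM interaction to its attestation so an auditor can later confirm exactly which prompt and response were covered. Below is a minimal, SDK-agnostic sketch of such an audit record; the attestation value is simply whatever proof handle the verified inference call returned.

```python
import hashlib
import json
from datetime import datetime, timezone


def audit_record(prompt: str, response: str, attestation: str) -> dict:
    """Bundle a verifiable record of one LLM interaction for later audit.

    The hashes pin down exactly which prompt and response the attestation
    covers; the attestation format depends on the inference call used.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "attestation": attestation,
    }


# Example: persist the record alongside application logs or publish it on-chain.
record = audit_record(
    "Summarize the risks in contract 0xabc...",   # example prompt
    "The contract allows the owner to pause...",  # example response
    "tee-attestation-blob",                       # proof returned by the verified call
)
print(json.dumps(record, indent=2))
```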

Privacy-Sensitive Applications

  • Privacy-Preserving Chat Applications: Build chat applications that process sensitive information with TEE verification, ensuring data remains private and code execution is audited
  • Secure Customer Support: Deploy customer support systems that handle personal data with hardware-level attestation and confidential compute
  • Enterprise AI with Audit Trails: Build enterprise AI applications that require verifiable, tamper-proof LLM inference with full transparency

Automated Workflows

  • Scheduled ML Workflows: Deploy automated workflows that execute on a schedule with live oracle data, perfect for continuous monitoring and decision-making (see the sketch after this list)
  • Blockchain-Orchestrated Autonomous AI Agents: Agents that execute complex workflows across multiple models and data sources with full verification
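
The sketch below shows the rough shape such a workflow takes; the data-fetch and inference functions are placeholders standing in for whatever oracle and verified-model calls the SDK exposes.

```python
# Illustrative only: fetch_price_history() and run_inference() stand in for
# the SDK's oracle/price-feed and verified model-inference calls.
import time


def fetch_price_history() -> list[float]:
    return [1.00, 1.02, 0.99]         # placeholder for live oracle data


def run_inference(prices: list[float]) -> float:
    return max(prices) - min(prices)  # placeholder for a verified model call


def hourly_volatility_job() -> None:
    prices = fetch_price_history()
    volatility = run_inference(prices)
    print(f"latest volatility estimate: {volatility:.4f}")


if __name__ == "__main__":
    # In production, hand this job to a proper scheduler (cron, the platform's
    # workflow tooling, etc.) rather than a sleep loop.
    while True:
        hourly_volatility_job()
        time.sleep(3600)
```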

Data Processing

  • Decentralized AI Decision-Making Processes: Critical decision systems that require verifiable, tamper-proof AI inference with full audit trails
  • Real-Time Model Inference: Applications that need fast, verified inference with competitive pricing and full integrity guarantees
  • AI Decision Documentation: Create systems that document AI decisions with verifiable prompts and responses for accountability
  • Blockchain-Secured LLM Services: Build LLM-as-a-Service offerings with payment-gated access and TEE verification for trustless AI inference

MemSync Use Cases

Build AI applications with persistent memory and long-term context management using MemSync, OpenGradient's long-term memory layer for AI.
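
For instance, a memory-augmented chat turn might look roughly like the sketch below. The client and method names are illustrative placeholders, not MemSync's documented API; see the MemSync docs for the real interface.

```python
# Illustrative sketch -- MemorySyncClient and its methods are placeholders,
# not MemSync's documented API.
class MemorySyncClient:
    """Stand-in for a long-term memory client with semantic search."""

    def add_memory(self, user_id: str, text: str) -> None:
        ...  # placeholder: extract, classify, and store a memory

    def search(self, user_id: str, query: str, top_k: int = 5) -> list[str]:
        ...  # placeholder: return the most relevant stored memories


def build_prompt(mem: MemorySyncClient, user_id: str, question: str) -> str:
    # Recall the most relevant stored memories via semantic search and prepend
    # them, giving the LLM long-term context beyond its token window.
    memories = mem.search(user_id, question, top_k=3) or []
    context = "\n".join(f"- {m}" for m in memories)
    return f"Known about this user:\n{context}\n\nQuestion: {question}"
```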

Personalized AI Applications

  • Personalized Chatbots: Build chatbots that remember user preferences, past conversations, and context across sessions, providing truly personalized interactions
  • AI Assistants with Memory: Create virtual assistants that learn about users over time, remembering their preferences, interests, and important information
  • Context-Aware Applications: Develop applications that adapt based on user history, preferences, and stored memories for enhanced user experiences

Memory-Enhanced LLM Apps

  • Long-Term Context Management: Build LLM applications that maintain context beyond token limits by leveraging semantic search across stored memories
  • Multi-Session AI Applications: Create AI applications that maintain continuity across multiple sessions, remembering user interactions and preferences
  • Intelligent Memory Organization: Deploy applications that automatically extract, classify, and organize memories (semantic vs episodic) for optimal context retrieval

Enterprise & Consumer Applications

  • Customer Relationship Management (CRM) AI: Build CRM systems that remember customer interactions, preferences, and history across all touchpoints
  • Personalized Recommendation Systems: Create recommendation engines that learn from user behavior and preferences stored in long-term memory
  • Educational AI Tutors: Develop tutoring systems that remember student progress, learning patterns, and areas of difficulty for personalized instruction
  • Healthcare AI with Patient Memory: Build healthcare applications that maintain patient context and history across multiple interactions

Key Capabilities

All use cases benefit from OpenGradient's core capabilities:

  • Atomic Execution: Model inference executes atomically within transactions, ensuring state consistency
  • Native Verification: ML execution supports multiple verification methods (ZKML, TEE, Vanilla), while LLM execution uses TEE verification. Verification proofs and attestations are natively validated by the network (see the sketch at the end of this list)
  • Model Scheduling: Automate periodic model execution with the Model Scheduler
  • Price Feeds: Access real-time and historical price data through integrated oracles for time-series models
  • Data Preprocessing: Built-in data preprocessing capabilities for preparing model inputs
  • Composability: Chain multiple models together using smart contract logic for complex workflows
  • Verifiable LLM Inference: LLM execution with TEE verification provides hardware attestation, privacy protection, and cryptographic proof of prompts for mission-critical applications
  • Long-Term Memory: MemSync provides automatic memory extraction, semantic search, and user profile generation for building personalized AI applications with persistent context
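
To make the verification options concrete, the hedged sketch below shows how a caller might choose a mode per ML inference call. The enum values and the infer signature are illustrative placeholders, not the SDK's actual identifiers.

```python
# Illustrative sketch -- the enum and infer() signature are placeholders for
# whatever the SDK actually exposes; the point is that the verification method
# is a per-call choice for ML models, while LLM calls always use TEE.
from enum import Enum


class InferenceMode(Enum):
    VANILLA = "vanilla"  # no proof, lowest latency
    TEE = "tee"          # hardware attestation of the execution environment
    ZKML = "zkml"        # zero-knowledge proof of the model execution


def infer(model_id: str, model_input: dict, mode: InferenceMode) -> dict:
    """Stand-in for a verified inference call; returns outputs plus proof metadata."""
    ...


# A risk model that feeds an on-chain decision might justify ZKML's cost,
# while a dashboard estimate can run VANILLA:
# result = infer("volatility-model", {"prices": [1.00, 1.02]}, InferenceMode.ZKML)
```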