
OpenLedger In-Depth Research Report: Building a data-driven, model-composable agent economy on OP Stack + EigenDA.

1. Introduction | The Model Layer Leap of Crypto AI

Data, models, and computing power are the three core elements of AI infrastructure, analogous to fuel (data), engine (model), and energy (computing power); none can be dispensed with. Much like the traditional AI industry, the Crypto AI field has moved through similar infrastructure stages. In early 2024 the market was dominated by decentralized GPU projects (certain GPU computing, rendering, and network platforms), which generally emphasized an extensive growth logic of "competing on raw computing power." Entering 2025, however, the industry's focus has gradually shifted to the model and data layers, marking Crypto AI's transition from competition over underlying resources to more sustainable, application-value-driven mid-layer construction.

General Large Model (LLM) vs Specialized Model (SLM)

Training a traditional large language model (LLM) relies heavily on large-scale datasets and complex distributed architectures; parameter counts range from 70B to 500B, and a single training run can cost several million dollars. A Specialized Language Model (SLM), by contrast, is a lightweight fine-tuning paradigm that reuses a foundational model: it typically starts from an open-source model such as LLaMA, Mistral, or DeepSeek and combines a small amount of high-quality domain data with techniques like LoRA to quickly build an expert model with specific domain knowledge, significantly reducing training costs and technical barriers.

It is worth noting that an SLM's weights are not merged into the LLM. Instead, the two cooperate through agent-architecture calls, dynamic routing via a plugin system, hot-plugging of LoRA modules, and RAG (Retrieval-Augmented Generation). This architecture retains the broad coverage of the LLM while boosting domain performance through the fine-tuned modules, forming a highly flexible composite intelligent system.
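The "LLM + SLM" cooperation pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any OpenLedger API: a router dispatches a query to a domain-specific SLM when one matches and falls back to the general-purpose LLM otherwise. All names here (`route_query`, `KEYWORD_ROUTES`, the model IDs) are invented for the example.

```python
# Illustrative dynamic-routing sketch: specialized modules handle their
# domains; everything else goes to the broad-coverage base model.
KEYWORD_ROUTES = {
    "contract": "legal-slm",     # legal fine-tune handles contract questions
    "diagnosis": "medical-slm",  # medical fine-tune handles clinical questions
}

def route_query(query: str) -> str:
    """Return the model ID that should serve this query."""
    for keyword, slm_id in KEYWORD_ROUTES.items():
        if keyword in query.lower():
            return slm_id          # hot-plugged specialized module
    return "base-llm"              # broad-coverage foundation model

print(route_query("Review this contract clause"))  # -> legal-slm
print(route_query("What is the weather today?"))   # -> base-llm
```

In practice the routing signal would come from an embedding classifier or the agent framework itself rather than keywords, but the division of labor is the same: the base model stays frozen and generic while plug-in modules supply domain expertise.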

The value and boundaries of Crypto AI at the model level

Crypto AI projects find it inherently difficult to directly enhance the core capabilities of large language models (LLMs), for two core reasons:

  • Technical barriers are too high: the scale of data, computing resources, and engineering capability required to train foundation models is enormous; currently only technology giants in the United States (certain AI companies) and China (certain deep-learning companies) possess them.
  • Limitations of the open-source ecosystem: although mainstream foundational models such as LLaMA and Mixtral have been open-sourced, the breakthroughs that drive models forward still come mainly from research institutions and closed-source engineering systems, leaving on-chain projects limited room to participate at the core model level.

However, on top of open-source foundational models, Crypto AI projects can still extend value by fine-tuning specialized language models (SLMs) and combining them with Web3's verifiability and incentive mechanisms. Acting as the "peripheral interface layer" of the AI industry chain, this manifests in two core directions:

  • Trustworthy Verification Layer: Enhances the traceability and tamper-resistance of AI outputs by recording the model generation path, data contributions, and usage on-chain.
  • Incentive Mechanism: Utilizing the native Token to incentivize behaviors such as data uploading, model invocation, and agent execution, creating a positive cycle of model training and service.

AI Model Type Classification and Blockchain Applicability Analysis

It can be seen that the feasible focus of model-oriented Crypto AI projects lies mainly in the lightweight fine-tuning of small SLMs, on-chain data access and verification for RAG architectures, and local deployment and incentives for edge models. By combining blockchain verifiability with token mechanisms, Crypto can provide unique value in these mid-to-low-resource model scenarios, forming differentiated value at the AI "interface layer."

A blockchain AI chain built around data and models can record the contribution source of every piece of data and every model clearly and immutably on-chain, significantly enhancing data credibility and the traceability of model training. Through smart contracts, rewards are distributed automatically whenever data or a model is called, turning AI behavior into measurable, tradable tokenized value and building a sustainable incentive system. In addition, community users can evaluate model performance through token voting, participate in rule formulation and iteration, and improve the decentralized governance structure.
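The automatic reward distribution mentioned above can be made concrete with a small sketch. This is not OpenLedger's actual contract logic, only an illustration of the pattern: when a model call pays a fee, the fee is split among contributors in proportion to their recorded attribution weights, the way an on-chain settlement contract might do it.

```python
# Illustrative proportional reward split (hypothetical, not OpenLedger's
# contract code). Fees are in smallest token units (integers), as they
# would be on-chain.
def distribute_rewards(fee: int, attributions: dict) -> dict:
    """Split `fee` among contributors by attribution weight."""
    total = sum(attributions.values())
    payouts = {who: int(fee * w / total) for who, w in attributions.items()}
    # Integer rounding can leave dust; give it to the largest contributor
    # so the payouts always sum exactly to the fee.
    dust = fee - sum(payouts.values())
    payouts[max(attributions, key=attributions.get)] += dust
    return payouts

print(distribute_rewards(1_000, {"data_provider": 0.5, "model_dev": 0.3, "agent": 0.2}))
# -> {'data_provider': 500, 'model_dev': 300, 'agent': 200}
```

The key property is exactness: every unit of the fee is accounted for, which matters when the split is settled by a contract rather than reconciled off-chain.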


2. Project Overview | OpenLedger's AI Chain Vision

OpenLedger is one of the few blockchain AI projects in the current market that focuses on data and model incentive mechanisms. It was the first to propose the concept of "Payable AI", aiming to build a fair, transparent, and composable AI operating environment that incentivizes data contributors, model developers, and AI application builders to collaborate on the same platform and earn on-chain rewards based on their actual contributions.

OpenLedger provides a complete closed loop from "data provision" to "model deployment" to "call revenue sharing", with core modules including:

  • Model Factory: fine-tune, train, and deploy custom models with LoRA on top of open-source LLMs, with no programming required;
  • OpenLoRA: supports thousands of coexisting models, loaded dynamically on demand, significantly reducing deployment costs;
  • PoA (Proof of Attribution): measures contributions and distributes rewards through on-chain call records;
  • Datanets: structured data networks tailored for vertical scenarios, built and validated through community collaboration;
  • Model Proposal Platform: a composable, callable, and payable on-chain model marketplace.

Through the above modules, OpenLedger has built a data-driven, model-composable "intelligent economy infrastructure," promoting the on-chain value chain of AI.

In the adoption of blockchain technology, OpenLedger uses OP Stack + EigenDA as a foundation to build a high-performance, low-cost, and verifiable data and contract execution environment for AI models.

  • Built on OP Stack: Based on the Optimism technology stack, supporting high throughput and low-cost execution;
  • Settle on the Ethereum mainnet: Ensure transaction security and asset integrity;
  • EVM Compatible: Convenient for developers to quickly deploy and expand based on Solidity;
  • EigenDA provides data availability support: significantly reduces storage costs and ensures data verifiability.

Compared to certain public chains that focus more on the underlying layer, emphasizing data sovereignty and an "AI Agents on BOS" architecture, OpenLedger concentrates on building an AI-specific chain for data and model incentives, aiming to make model development and invocation a traceable, composable, and sustainable value loop on-chain. It serves as the model-incentive infrastructure of the Web3 world: model hosting reminiscent of a certain model-hosting platform, usage billing akin to a certain payment platform, and on-chain composable interfaces resembling certain infrastructure services, advancing the path toward "models as assets."


3. Core Components and Technical Architecture of OpenLedger

3.1 Model Factory: the No-Code Model Factory

ModelFactory is a large language model (LLM) fine-tuning platform under the OpenLedger ecosystem. Unlike traditional fine-tuning frameworks, ModelFactory offers a purely graphical user interface, eliminating the need for command-line tools or API integration. Users can fine-tune the model based on datasets that have been authorized and reviewed on OpenLedger. It realizes an integrated workflow for data authorization, model training, and deployment, with its core processes including:

  • Data Access Control: Users submit data requests, providers review and approve, and data is automatically integrated into the model training interface.
  • Model selection and configuration: Supports mainstream LLMs (such as LLaMA, Mistral), with hyperparameter configuration via GUI.
  • Lightweight Fine-tuning: Built-in LoRA / QLoRA engine, displaying training progress in real-time.
  • Model Evaluation and Deployment: Built-in evaluation tools, supports exporting deployment or ecosystem sharing calls.
  • Interactive verification interface: Provides a chat-style interface for easy testing of the model's Q&A capability.
  • RAG Generation Traceability: Responses include source citations to enhance trust and auditability.

The Model Factory system architecture consists of six major modules, spanning identity authentication, data permissions, model fine-tuning, evaluation and deployment, and RAG traceability, to create an integrated model service platform that is secure and controllable, interactive in real time, and sustainably monetizable.
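To make the workflow above concrete, here is a hedged sketch of what a ModelFactory-style fine-tuning job request might look like. Every field name below is hypothetical, chosen only to mirror the listed steps (authorized dataset, model selection, LoRA fine-tuning, deployment); the real platform exposes these choices through its GUI rather than a dict.

```python
# Hypothetical job spec mirroring the ModelFactory workflow steps.
job = {
    "dataset_id": "datanet-legal-001",  # must be an authorized, reviewed dataset
    "base_model": "LLaMA",              # one of the supported open-source models
    "method": "LoRA",                   # lightweight fine-tuning engine
    "hyperparams": {"rank": 8, "lr": 2e-4, "epochs": 3},
    "deploy": True,                     # export for ecosystem sharing after eval
}

SUPPORTED_MODELS = {"LLaMA", "Mistral", "Qwen", "ChatGLM", "Deepseek"}

def validate_job(job: dict) -> bool:
    """Minimal sanity checks a GUI front end might run before submission."""
    return (
        job["base_model"] in SUPPORTED_MODELS
        and job["method"] in {"LoRA", "QLoRA"}
        and job["hyperparams"]["rank"] > 0
    )

print(validate_job(job))  # -> True
```

The point of the sketch is the shape of the pipeline: the dataset reference carries its own authorization state, and the training method is constrained to the lightweight LoRA/QLoRA engines the platform ships with.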

The summary of the large language model capabilities currently supported by ModelFactory is as follows:

  • LLaMA Series: The most extensive ecosystem, active community, and strong general performance, it is one of the most mainstream open-source foundational models currently.
  • Mistral: Efficient architecture with excellent inference performance, suitable for flexible deployment in resource-limited scenarios.
  • Qwen: Produced by a certain tech giant, it performs excellently on Chinese tasks, has strong overall capabilities, and is suitable as the first choice for domestic developers.
  • ChatGLM: The Chinese dialogue effect is outstanding, suitable for niche customer service and localization scenarios.
  • Deepseek: Excels in code generation and mathematical reasoning, suitable for smart development assistant tools.
  • Gemma: A lightweight model launched by a tech giant, featuring a clear structure that is easy to quickly get started with and experiment.
  • Falcon: Once a performance benchmark, suitable for basic research or comparative testing, but community activity has decreased.
  • BLOOM: Strong support for multiple languages, but weaker inference performance, suitable for language coverage research.
  • GPT-2: A classic early model, suitable only for teaching and validation purposes, not recommended for actual deployment.

Although OpenLedger's model combination does not include the latest high-performance MoE models or multimodal models, its strategy is not outdated. Instead, it is a "practical-first" configuration based on the real constraints of on-chain deployment (inference costs, RAG adaptation, LoRA compatibility, EVM environment).

Model Factory, as a no-code toolchain, builds a proof-of-contribution mechanism into every model to secure the rights of data contributors and model developers. Compared with traditional model development tools, it offers low barriers to entry, monetization potential, and composability:

  • For developers: a complete pathway for model incubation, distribution, and revenue;
  • For the platform: an ecosystem of model-asset circulation and composition;
  • For users: models or agents can be composed and used as easily as calling an API.


3.2 OpenLoRA: On-Chain Assetization of Fine-Tuned Models

LoRA (Low-Rank Adaptation) is an efficient parameter fine-tuning method that learns new tasks by inserting "low-rank matrices" into pre-trained large models without modifying the original model parameters, significantly reducing training costs and storage requirements. Traditional large language models (such as LLaMA, GPT-3) typically have billions or even hundreds of billions of parameters. To use them for specific tasks (such as legal question answering, medical consultations), fine-tuning is required. The core strategy of LoRA is: "freeze the parameters of the original large model and only train the newly inserted parameter matrices." It is parameter-efficient, trains quickly, and is flexibly deployable, making it the mainstream fine-tuning method currently best suited for Web3 model deployment and compositional calls.
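The LoRA idea stated above can be shown with tiny matrices in pure Python. The frozen weight W is left untouched; the task-specific update is the product of two small low-rank matrices, B (d×r) and A (r×d), so the effective forward pass is y = Wx + B(Ax). Only A and B would be trained; the numbers below are toy values for illustration.

```python
# Minimal LoRA forward pass with rank-1 matrices (pure Python, no frameworks).
def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

d, r = 3, 1                               # full dim 3, rank-1 adapter
W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]     # frozen pretrained weight (identity here)
A = [[1, 1, 1]]                           # r x d, trained
B = [[0.5], [0.0], [0.0]]                 # d x r, trained (initialized to 0 in practice)

x = [1.0, 2.0, 3.0]
# y = W x + B (A x): the low-rank path adds a learned correction to the frozen output.
y = [wi + bi for wi, bi in zip(matvec(W, x), matvec(B, matvec(A, x)))]
print(y)  # -> [4.0, 2.0, 3.0]
```

The storage saving is the whole point: instead of a d×d update (9 numbers here, billions at real scale), the adapter stores only d×r + r×d numbers, which is why thousands of fine-tunes can share one base model.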

OpenLoRA is a lightweight inference framework built by OpenLedger, specifically designed for multi-model deployment and resource sharing. Its core goal is to address common issues in current AI model deployment, such as high costs, low reusability, and GPU resource wastage, and to promote the implementation of "Payable AI."

The core components of the OpenLoRA system architecture are based on a modular design, covering key aspects such as model storage, inference execution, and request routing, achieving efficient and low-cost multi-model deployment and invocation capabilities:

  • LoRA Adapter Storage Module (LoRA Adapters Storage): fine-tuned LoRA adapters are hosted on OpenLedger and loaded on demand, avoiding preloading all models into GPU memory and saving resources.
  • Model Hosting & Adapter Merging Layer: all fine-tuned models share the same base model; during inference, the required LoRA adapter is dynamically merged with the base model, allowing many models to serve requests on shared weights.
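The on-demand loading described above can be sketched as an LRU-style cache: only the most recently used adapters stay resident, and the rest are evicted until requested again. This is an illustrative sketch, not OpenLoRA's implementation; `load_from_storage` stands in for fetching an adapter hosted on OpenLedger, and all names are invented.

```python
from collections import OrderedDict

class AdapterCache:
    """Keep at most `capacity` LoRA adapters resident; evict least recent."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._cache = OrderedDict()  # adapter_id -> adapter weights

    def get(self, adapter_id: str):
        if adapter_id not in self._cache:
            if len(self._cache) >= self.capacity:
                self._cache.popitem(last=False)      # evict least recently used
            self._cache[adapter_id] = self.load_from_storage(adapter_id)
        self._cache.move_to_end(adapter_id)          # mark as most recently used
        return self._cache[adapter_id]

    def load_from_storage(self, adapter_id: str):
        return f"<weights of {adapter_id}>"          # placeholder for hosted fetch

cache = AdapterCache(capacity=2)
cache.get("legal-lora")
cache.get("medical-lora")
cache.get("code-lora")                               # evicts legal-lora
print("legal-lora" in cache._cache)  # -> False
```

Because adapters are small relative to the base model, this trade of occasional reload latency for shared GPU memory is what lets thousands of fine-tunes coexist on one serving node.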