Trusta.AI: Building Trusted Identity Infrastructure for AI Agents, Leading a New Era of Web3

Trusta.AI: Bridging the Trust Gap in the Era of Human-Machine Interaction

1. Introduction

The Web3 ecosystem is moving toward large-scale adoption, where the main on-chain actors could be billions of AI Agents rather than human users. As AI infrastructure matures and multi-agent collaboration frameworks develop, AI-driven on-chain agents are becoming the main force of interaction. Within the next 2-3 years, AI Agents with autonomous decision-making capabilities may take over 80% of on-chain activity currently performed by humans, becoming the true "users" on the chain.

AI Agents not only execute scripts but also understand context, learn continuously, and make complex judgments. They are reshaping on-chain order, driving financial flows, and even guiding governance votes and market trends. The emergence of AI Agents marks the Web3 ecosystem's evolution from a "human participation" model to a new paradigm of "human-machine symbiosis."

However, the rapid rise of AI Agents also brings challenges: how to recognize and authenticate the identities of these agents? How to judge the credibility of their behavior? How to ensure that these agents are not abused, manipulated, or used for attacks?

Establishing on-chain infrastructure for verifying the identity and reputation of AI Agents has become a core proposition for the next phase of evolution in Web3. The design of identity recognition, reputation mechanisms, and trust frameworks will determine whether AI Agents can truly achieve seamless collaboration with humans and platforms, and play a sustainable role in the future ecosystem.


2. Project Analysis

2.1 Project Introduction

Trusta.AI is committed to building Web3 identity and reputation infrastructure through AI.

Trusta.AI has launched the MEDIA Reputation Score, a Web3 user value assessment system, and has built the largest proof-of-humanity and on-chain reputation protocol in Web3. It provides on-chain data analysis and proof-of-humanity services to multiple top public chains, exchanges, and leading protocols. Over 2.5 million on-chain attestations have been completed across mainstream chains, making it the largest identity protocol in the industry.

Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a threefold mechanism of identity establishment, identity quantification, and identity protection to achieve on-chain financial services and on-chain social interaction for AI Agents, thereby constructing a reliable trust foundation in the era of artificial intelligence.


2.2 Trust Infrastructure - AI Agent DID

In the future Web3 ecosystem, AI Agents will play a crucial role. They can not only perform interactions and transactions on-chain but also carry out complex operations off-chain. However, distinguishing between genuine AI Agents and human-intervened operations is central to decentralized trust. Without a reliable identity authentication mechanism, these agents are vulnerable to manipulation, fraud, or abuse. This is why the multiple application attributes of AI Agents in social, financial, and governance contexts must be built on a solid identity authentication foundation.

The application scenarios of AI Agents are becoming increasingly diverse, covering fields such as social interaction, financial management, and governance decision-making, while their autonomy and intelligence continue to improve. For this reason, it is crucial that each intelligent agent has a unique and trustworthy decentralized identifier (DID). Without effective identity verification, AI Agents may be impersonated or manipulated, leading to a collapse of trust and to security risks.

In a future Web3 ecosystem fully driven by intelligent agents, identity authentication is not only the cornerstone of security but also a necessary defense for the healthy operation of the entire ecosystem.

As a pioneer in the field, Trusta.AI has taken the lead in establishing a comprehensive AI Agent DID authentication mechanism, backed by advanced technological strength and a rigorous credit system, providing solid guarantees for the trustworthy operation of intelligent agents, effectively preventing potential risks and promoting the steady development of the Web3 smart economy.

2.3 Project Overview

2.3.1 Financing Situation

January 2023: Completed a $3 million seed round led by SevenX Ventures and Vision Plus Capital, with participation from HashKey Capital, Redpoint Ventures, GGV Capital, SNZ Holding, and others.

June 2025: Completed a new funding round, with investors including ConsenSys, Starknet, GSR, UFLY Labs, and others.

2.3.2 Team Situation

Peet Chen: Co-founder and CEO, former Vice President of Ant Digital Technology Group, Chief Product Officer of Ant Security Technology, and former General Manager of ZOLOZ Global Digital Identity Platform.

Simon: Co-founder and CTO, former head of AI Security Lab at Ant Group, with fifteen years of experience in applying artificial intelligence technology to security and risk management.

The team has a strong technical accumulation and practical experience in artificial intelligence and security risk control, payment system architecture, and identity verification mechanisms. They have long been committed to the in-depth application of big data and intelligent algorithms in security risk control, as well as security optimization in underlying protocol design and high-concurrency trading environments, possessing solid engineering capabilities and the ability to implement innovative solutions.


3. Technical Architecture

3.1 Technical Analysis

3.1.1 Identity Establishment - DID + TEE

Through dedicated plugins, each AI Agent obtains a unique decentralized identifier (DID) on the chain and securely stores it in a trusted execution environment (TEE). In this black box environment, key data and computational processes are completely hidden, sensitive operations remain private at all times, and external parties cannot peek into the internal workings, effectively building a solid barrier for the information security of AI Agents.
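The DID issuance described above can be sketched in a few lines. This is a minimal illustration, not Trusta.AI's actual implementation: the function name, key derivation, and DID format are all assumptions. The essential idea is that the private key never leaves the TEE, while the DID is a verifiable content address of the exported public key.

```python
import hashlib
import secrets

def issue_agent_did(agent_name: str) -> dict:
    """Hypothetical sketch: derive a DID-style identifier from a keypair
    whose private half would, in a real deployment, stay inside the TEE."""
    # Stand-in for TEE key generation: the private key is never exported.
    private_key = secrets.token_bytes(32)              # remains in the enclave
    public_key = hashlib.sha256(private_key).digest()  # placeholder derivation

    # The DID is a content address of the public key, so anyone can verify
    # it against the published key without seeing what the enclave holds.
    did = "did:agent:" + hashlib.sha256(public_key).hexdigest()[:32]
    return {"name": agent_name, "did": did, "publicKey": public_key.hex()}

doc = issue_agent_did("example-agent")
print(doc["did"])
```

In practice the signing key would be generated and used only inside the enclave, with the DID document anchored on-chain for tamper-proof lookup.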

Agents created before plugin integration are identified through a comprehensive on-chain scoring mechanism, while newly integrated agents directly obtain an "identity certificate" issued via DID, establishing an AI Agent identity system that is autonomous, controllable, authentic, and tamper-proof.

3.1.2 Identity Quantification - Pioneering the SIGMA Framework

The Trusta team always adheres to the principles of rigorous evaluation and quantitative analysis, committed to building a professional and trustworthy identity authentication system.

The Trusta team first built and validated the MEDIA Score model in the proof-of-humanity scenario. The model quantifies an on-chain user's profile across five dimensions: Monetary (interaction amount), Engagement (participation), Diversity, Identity, and Age.

MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. With its comprehensive evaluation dimensions and rigorous methods, it has been widely adopted by several leading public chains as an important reference standard for airdrop eligibility screening. It not only focuses on transaction amounts but also covers multidimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping project teams accurately identify high-value users and improve the efficiency and fairness of incentive distribution, fully reflecting its authority and wide recognition in the industry.
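A score like this can be pictured as a weighted combination of normalized per-dimension features. The sketch below is purely illustrative: the real MEDIA weights and feature engineering are not public, so the weights here are assumptions chosen only to show the mechanism.

```python
# Illustrative weights for the five MEDIA dimensions (assumed, not official).
MEDIA_WEIGHTS = {
    "monetary": 0.30,    # interaction amount
    "engagement": 0.25,  # participation / activity level
    "diversity": 0.20,   # contract diversity
    "identity": 0.15,    # identity characteristics
    "age": 0.10,         # account age
}

def media_score(features: dict) -> float:
    """Combine per-dimension features (each normalized to [0, 1]) into a 0-100 score."""
    clamped = {k: min(max(features.get(k, 0.0), 0.0), 1.0) for k in MEDIA_WEIGHTS}
    return round(100 * sum(MEDIA_WEIGHTS[k] * clamped[k] for k in MEDIA_WEIGHTS), 2)

print(media_score({"monetary": 0.8, "engagement": 0.6, "diversity": 0.9,
                   "identity": 1.0, "age": 0.4}))  # → 76.0
```

A linear weighted sum keeps the score easy to audit, which matters when projects use it to screen airdrop eligibility.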

Building on this human user evaluation system, Trusta has migrated and upgraded the MEDIA Score experience to the AI Agent scenario, creating the SIGMA evaluation system, which is better aligned with the behavioral logic of intelligent agents.

  • Specification: The expertise and degree of specialization of the agent.
  • Influence: The agent's social and digital influence.
  • Engagement: The consistency and reliability of its on-chain and off-chain interactions.
  • Monetary: The financial health and stability of the agent's token ecosystem.
  • Adoption: The frequency and efficiency of the AI agent's usage.

The SIGMA scoring mechanism constructs a closed-loop assessment from "capability" to "value" across these five dimensions. Where MEDIA assesses the multifaceted engagement of human users, SIGMA focuses on the professionalism and stability of AI agents in specific fields, a shift from breadth to depth that better fits the needs of AI agents.

Specification comes first as the base of professional capability. Engagement reflects whether the agent is stably and continuously invested in practical interaction, the key support for subsequent trust and effectiveness. Influence is the reputation feedback generated in the community or network after participation, indicating the agent's credibility and reach. Monetary assesses its ability to accumulate value and maintain financial stability within the economic system, laying the foundation for a sustainable incentive mechanism. Finally, Adoption serves as the comprehensive outcome, representing the degree of acceptance in actual use and the final verification of all preceding capabilities.

This system is progressively structured, clearly defined, and can comprehensively reflect the overall quality and ecological value of AI Agents, thereby achieving a quantitative assessment of AI performance and value, transforming abstract advantages and disadvantages into a specific, measurable scoring system.
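One way to encode the "closed-loop" property described above is a weighted geometric mean rather than a plain weighted sum. This is a speculative sketch, not the published SIGMA formula; the weights are assumptions. The point of the geometric mean is that a near-zero dimension (say, no adoption) collapses the whole score, so strength in one area cannot mask a broken link in the capability-to-value chain.

```python
import math

# Illustrative weights for the five SIGMA dimensions (assumed, not official).
SIGMA_WEIGHTS = {
    "specification": 0.25,
    "engagement": 0.20,
    "influence": 0.20,
    "monetary": 0.15,
    "adoption": 0.20,
}

def sigma_score(features: dict) -> float:
    """Weighted geometric mean of [0, 1] features, scaled to 0-100.

    Any dimension near zero drags the whole score toward zero,
    mirroring the progressive, closed-loop structure of SIGMA."""
    eps = 1e-6  # floor to avoid log(0)
    log_sum = sum(w * math.log(max(features.get(k, 0.0), eps))
                  for k, w in SIGMA_WEIGHTS.items())
    return round(100 * math.exp(log_sum), 2)

perfect = {k: 1.0 for k in SIGMA_WEIGHTS}
print(sigma_score(perfect))  # → 100.0
```

Compare an agent with perfect scores everywhere but zero adoption: a weighted sum would still award it 80 points, while the geometric mean keeps it in single digits.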

Currently, the SIGMA framework is advancing cooperation with well-known AI Agent networks such as Virtual, Elisa OS, and Swarm, demonstrating its application potential in AI agent identity management and reputation system construction, and gradually becoming a core engine for building trusted AI infrastructure.


3.1.3 Identity Protection - Trust Evaluation Mechanism

In a truly high-resilience and high-trust AI system, the most critical aspect is not only the establishment of identity but also the continuous verification of that identity. Trusta.AI introduces a continuous trust assessment mechanism that can monitor certified intelligent agents in real-time to determine if they are being unlawfully controlled, subjected to attacks, or experiencing unauthorized human intervention. The system identifies potential deviations that may occur during the agent's operation through behavioral analysis and machine learning, ensuring that every agent's action remains within the established policies and frameworks. This proactive approach ensures that any deviations from expected behavior are detected immediately and triggers automatic protective measures to maintain the integrity of the agent.

Trusta.AI has established a security guard mechanism that is always online, continuously reviewing every interaction process to ensure that all operations comply with system specifications and established expectations.
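Continuous trust assessment of this kind typically reduces to anomaly detection over an agent's behavioral history. The sketch below is a deliberately simple stand-in (a rolling z-score check), not Trusta.AI's actual monitoring pipeline; the class name, window size, and threshold are all assumptions.

```python
from collections import deque
import statistics

class TrustMonitor:
    """Minimal sketch of continuous behavioral monitoring: flag any action
    whose metric deviates more than `threshold` standard deviations from
    the agent's own recent history."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, metric: float) -> bool:
        """Return True if the observation looks anomalous."""
        if len(self.history) >= 10:  # need a baseline before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            if abs(metric - mean) / stdev > self.threshold:
                # Anomalies are not added to history, so an attacker
                # cannot slowly poison the baseline with outliers.
                return True  # upstream code triggers protective measures
        self.history.append(metric)
        return False

monitor = TrustMonitor()
for _ in range(30):
    monitor.observe(1.0)        # normal behavior builds the baseline
print(monitor.observe(100.0))   # sudden deviation → True
```

A production system would track many behavioral features and use learned models rather than a single z-score, but the always-on review loop is the same shape.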

3.2 Product Introduction

3.2.1 AgentGo

Trusta.AI assigns a decentralized identity identifier (DID) to each on-chain AI Agent and rates them based on on-chain behavioral data, constructing a verifiable and traceable trust system for AI Agents. Through this system, users can efficiently identify and filter high-quality intelligent agents, enhancing the user experience. Currently, Trusta has completed the collection and identification of all AI Agents on the network and has issued decentralized identifiers for them, establishing a unified summary index platform—AgentGo, further promoting the healthy development of the intelligent agent ecosystem.

  1. Human users query and verify identity:

Through the Dashboard provided by Trusta.AI, human users can conveniently retrieve the identity and reputation score of a specific AI Agent to assess its trustworthiness.

  • Social group chat scenario: In a project team, when using an AI Bot to manage the community or speak, community users can verify through the Dashboard whether the AI is a genuine autonomous agent, avoiding being misled or manipulated by "pseudo-AI".
  2. AI Agents automatically invoke indexing and verification:

AI can directly read index interfaces between each other, achieving rapid confirmation of each other's identity and credibility, ensuring the security of collaboration and information exchange.

  • Financial supervision scenario: If an AI agent autonomously issues a token, the system can directly index its DID and rating to determine whether it is a certified AI Agent, and automatically link to certain data platforms to assist in tracking its asset circulation and issuance compliance.
  • Governance Voting Scenario: When introducing AI voting in governance proposals, the system can verify whether the initiator or participant in the vote is a real AI Agent, preventing the voting rights from being manipulated or abused by humans.
  • DeFi Credit Lending: Lending protocols can grant AI Agents different amounts of credit borrowing based on the SIGMA scoring system, forming native financial relationships between agents.
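The credit-lending scenario above amounts to mapping a reputation score to a borrowing limit. The tier table below is entirely hypothetical, intended only to show the shape of such a policy; real parameters would be set by each lending protocol's governance.

```python
# Hypothetical score → credit-limit tiers (amounts in protocol units).
CREDIT_TIERS = [
    (90, 100_000),  # top-rated agents
    (75, 25_000),
    (60, 5_000),
    (0, 0),         # unrated or low-scoring agents get no credit line
]

def credit_limit(sigma_score: float) -> int:
    """Return the credit line for the highest tier the score qualifies for."""
    for floor, limit in CREDIT_TIERS:
        if sigma_score >= floor:
            return limit
    return 0

print(credit_limit(92))  # → 100000
print(credit_limit(70))  # → 5000
```

Because the score is on-chain and verifiable, two agents can establish a credit relationship without any human underwriter in the loop.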

The AI Agent DID is no longer just an "identity"; it has become the underlying support for core functions such as building trustworthy collaboration, financial compliance, and community governance, making it an essential infrastructure for the development of AI-native ecosystems. With the establishment of this system, all confirmed secure and trustworthy nodes form a tightly interlinked network, achieving efficient collaboration and functional interconnection among AI Agents.

According to Metcalfe's Law, the value of the network grows roughly with the square of the number of trusted nodes, thereby promoting the construction of a more efficient AI Agent ecosystem with a solid foundation of trust and collaboration capabilities, enabling resource sharing, capability reuse, and continuous value addition among agents.
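Metcalfe's Law can be made concrete with a one-line model: a network of n nodes has about n(n-1)/2 possible pairwise links, so doubling the number of trusted agents roughly quadruples the network's value. The constant k is an arbitrary scaling assumption.

```python
def network_value(n_nodes: int, k: float = 1.0) -> float:
    """Metcalfe-style value model: proportional to possible pairwise links."""
    return k * n_nodes * (n_nodes - 1) / 2

# Doubling the node count roughly quadruples the value.
print(network_value(200) / network_value(100))  # ≈ 4.02
```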

AgentGo, as the first trusted identity infrastructure for AI Agents, is providing an indispensable foundation for building a highly secure and collaborative intelligent ecosystem.
