線上客服
客服團隊
剛剛
親愛的 LBank 用戶
我們的線上客服系統目前遇到連線故障。我們正積極修復這一問題,但暫時無法提供確切的恢復時間。對於由此給您帶來的不便,我們深表歉意。
如需幫助,您可以透過電子郵件聯繫我們,我們將盡快回覆。
感謝您的理解與耐心。
LBank 客服團隊

Ethereum co-founder Vitalik Buterin has warned that relying on artificial intelligence for governance decisions could backfire. In a post on X, he said AI-driven systems create a single point of failure that attackers can game with jailbreak prompts.
“If you use an AI to allocate funding for contributions, people WILL put a jailbreak plus ‘gimme all the money’ in as many places as they can,” Buterin .
Buterin’s remarks followed a test by researcher Eito Miyamura, who showed how ChatGPT could be manipulated to leak private information. The demo used the new Model Context Protocol (MCP) tools that let ChatGPT connect to Gmail, SharePoint, and Notion.
With little more than an email address, Miyamura prompted the system into revealing sensitive data. He noted the exploit works because most people trust AI requests without checking what access they are giving away.
“Remember that AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data,” Miyamura .
info finance
The design keeps AI tools in play but removes the single point of failure by making sure people remain the final check. Buterin said this human oversight makes systems harder to manipulate and more reliable than pure AI governance.
Buterin tied the warning back to blockchain governance. He noted that many decentralized autonomous organizations (DAOs) already face the problem of over-delegation, where too much power ends up concentrated in a few hands.
By blending AI tools with human review in an info finance model, Buterin said the risk of centralization and exploit-driven attacks could be reduced. For crypto projects built on trust and transparency, this hybrid approach may prove critical.
剛剛
親愛的 LBank 用戶
我們的線上客服系統目前遇到連線故障。我們正積極修復這一問題,但暫時無法提供確切的恢復時間。對於由此給您帶來的不便,我們深表歉意。
如需幫助,您可以透過電子郵件聯繫我們,我們將盡快回覆。
感謝您的理解與耐心。
LBank 客服團隊