Anthropic is concerned about the "well-being" of its chatbot Claude.
The company has given Claude Opus 4 and 4.1 the ability to terminate conversations with users "in rare, extreme cases of persistently harmful or abusive interactions." At the same time, the developers clarified that the feature is intended primarily to protect the model itself.
As part of the accompanying research, Anthropic studied the model's "well-being," assessing its self-evaluations and behavioral preferences. The chatbot demonstrated a "consistent aversion to violence," a pattern that was particularly evident in Claude Opus 4.
As a reminder, in June Anthropic researchers found that AI models are capable of resorting to blackmail, leaking confidential company data, and even allowing a person to die in emergency scenarios.