OpenAI and Anthropic tested each other's models for issues such as hallucinations and safety.
Jin10 Data reported on August 28 that OpenAI and Anthropic recently evaluated each other's models to identify potential issues that might have been overlooked in their own internal testing. The two companies said in blog posts on Wednesday that, over the summer, they ran safety tests on each other's publicly available AI models, examining whether the models showed a tendency to hallucinate as well as so-called "misalignment," in which a model does not behave as its developers intended. The evaluations were completed before OpenAI launched GPT-5 and before Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.