Applied AI·April 21, 2026·1 min read

Sam Altman opens up about the Molotov cocktail attack on his home: 'The way Anthropic talks about OpenAI doesn't help'


Personal security risk for AI leadership is now entangled with inter-lab narrative warfare — 'doomerism' and 'fear-based marketing' aren't just PR frames; they shape who gets targeted in the real world. Boards and comms teams need to treat safety rhetoric as part of physical risk management, not just positioning.


Source: a handful of unauthorized users in a private Discord channel have been accessing Anthropic's Mythos model since the day the company announced it (Rachel Metz/Bloomberg)

A frontier-grade model with offensive cyber capabilities leaking into a private Discord is the nightmare scenario for every lab and enterprise running sensitive models. Treat access control, logging, and key management for high-capability models the way you would production payment rails — not like just another SaaS login.