
Sources say the NSA is using Mythos Preview, and one source says it is also being used widely within the DoD, despite Anthropic's designation as a supply chain risk (Axios)
THE SO WHAT
When the NSA and DoD keep using a tool after it's tagged as a supply chain risk, the real signal is that operational demand for frontier models is outrunning policy and vendor-risk frameworks. If you sell into defense or critical infra, assume your buyers will quietly route around formal bans for capability they consider mission-critical — your job is to design for that gray zone, not pretend it doesn't exist.
READ THE SOURCE
MORE FROM THE WIRE
Applied AI
'It's a bit surprising': Why the majority of businesses are still sending marketing emails without tracking ROI
If you're still blasting email without ROI tracking while only using AI for copy, you're not doing marketing; you're burning margin. Treat email as a performance channel this quarter or cut the spend and reallocate to surfaces where you can actually close the loop.
Uber's AI Push Hits a Wall: CTO Cites Budget Struggles Despite $3.4B Spend
A $3.4B AI bill with budget strain is the new warning label on undisciplined "AI everywhere" strategies—experimentation without ruthless kill criteria just turns into opex bloat. If you don't have a clear path from model spend to unit economics, you need a portfolio review this quarter, not another pilot.
Applied AI
The Harsh Glare in the Apple WWDC 26 Logo Is Teasing the Look of New Siri, Report Says
A redesigned Siri isn't about aesthetics — it's Apple signaling that the assistant is becoming a primary UX surface, not a bolt-on feature. If you're building on iOS, assume the OS-level agent will intermediate more user intent and start planning for a world where your app is a capability Siri orchestrates, not a destination users tap.
Applied AI
'They are not human': Why AI has 'predictable and systematic biases' when it comes to judging people
Structured trust models mean AI will discriminate the same way, every time — demographic bias becomes an engineered property, not a side effect. If you're using AI for hiring, lending, or moderation, you now own a repeatable bias function that regulators and plaintiffs can interrogate line by line.