
'They are not human': Why AI has 'predictable and systematic biases' when it comes to judging people
THE SO WHAT
Structured trust models mean AI will discriminate the same way, every time — demographic bias becomes an engineered property, not a side effect. If you're using AI for hiring, lending, or moderation, you now own a repeatable bias function that regulators and plaintiffs can interrogate line by line.
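The "repeatable bias function" point can be made concrete with a toy sketch. This is a hypothetical scoring model, not any real system: a deterministic model with a demographic term produces the exact same skew on every run, which is precisely what makes it auditable line by line.

```python
# Sketch of why a fixed scoring model makes bias a repeatable, auditable
# property: inputs that differ only in a demographic field get identically
# skewed scores every time. Model, fields, and weights are hypothetical.

def trust_score(applicant):
    # Hypothetical structured trust model with a hard-coded demographic term.
    score = 0.5 * applicant["credit_years"] + 0.3 * applicant["income_decile"]
    if applicant["group"] == "B":   # the engineered bias
        score -= 2.0
    return score

a = {"credit_years": 10, "income_decile": 7, "group": "A"}
b = {"credit_years": 10, "income_decile": 7, "group": "B"}

# The gap is the same on every invocation: a repeatable bias function
# that a regulator or plaintiff's expert can interrogate directly.
gap = trust_score(a) - trust_score(b)
print(gap)  # 2.0 on every run
```

The same determinism that makes the bias discoverable also makes it fixable: delete one line and the disparity vanishes, which is a very different liability posture than noisy human judgment.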
MORE FROM THE WIRE
Applied AI: Sources say NSA is using Mythos Preview, and a source says it is also being used widely within the DoD, despite Anthropic's designation as a supply chain risk (Axios)
When the NSA and DoD keep using a tool after it's tagged as a supply chain risk, the real signal is that operational demand for frontier models is outrunning policy and vendor-risk frameworks. If you sell into defense or critical infra, assume your buyers will quietly route around formal bans for capability they consider mission-critical — your job is to design for that gray zone, not pretend it doesn't exist.
Applied AI: The Harsh Glare in the Apple WWDC 26 Logo Is Teasing the Look of New Siri, Report Says
A redesigned Siri isn't about aesthetics — it's Apple signaling that the assistant is becoming a primary UX surface, not a bolt-on feature. If you're building on iOS, assume the OS-level agent will intermediate more user intent and start planning for a world where your app is a capability Siri orchestrates, not a destination users tap.
Applied AI: Apple’s Revamped Siri Interface in iOS 27 Is Hidden in WWDC Teaser
If Apple is teasing a revamped Siri UI while warning about memory shortages delaying Macs, assume the real constraint on on-device AI is DRAM, not model quality. Hardware roadmaps and AI UX are now coupled — if you're building on Apple platforms, align your product bets with their memory and silicon cadence, not just iOS release notes.
Applied AI: Sources: Google is in talks with Marvell Technology to develop a memory processing unit that works alongside TPUs, and a new TPU for running AI models (Qianer Liu/The Information)
Google pairing TPUs with a Marvell-built memory processing unit is an admission that the bottleneck is now memory bandwidth and locality, not raw flops. If you're designing AI workloads, architecture decisions around parameter sharding, KV cache, and sequence length are about to be constrained — or unlocked — by how fast your memory fabric evolves, not your GPU count.
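A quick back-of-envelope shows why memory bandwidth, not flops, bounds this workload. During autoregressive decoding, each generated token must stream roughly the full weight set plus the KV cache from memory, so tokens/sec is capped by the memory fabric. All numbers below are illustrative assumptions (a generic 70B-class fp16 model on a 3 TB/s fabric), not measurements of any TPU or Marvell part.

```python
# Why decode throughput is bandwidth-bound: per generated token, the chip
# reads ~all weights plus the KV cache once, regardless of available FLOPs.
# Every figure here is an illustrative assumption.

def kv_cache_bytes(layers, heads, head_dim, seq_len, dtype_bytes=2):
    # 2x for keys and values, fp16 by default.
    return 2 * layers * heads * head_dim * seq_len * dtype_bytes

def bandwidth_bound_tokens_per_sec(weight_bytes, kv_bytes, mem_bw_bytes_per_sec):
    # Upper bound: one full read of weights + KV cache per decoded token.
    return mem_bw_bytes_per_sec / (weight_bytes + kv_bytes)

weights = 70e9 * 2  # hypothetical 70B params in fp16 ≈ 140 GB
kv = kv_cache_bytes(layers=80, heads=64, head_dim=128, seq_len=8192)
tps = bandwidth_bound_tokens_per_sec(weights, kv, 3e12)  # 3 TB/s fabric
print(f"KV cache: {kv / 1e9:.1f} GB, decode upper bound: {tps:.0f} tokens/s")
```

Note that doubling flops changes nothing in this bound, while doubling bandwidth doubles it; that asymmetry is why a dedicated memory processing unit next to the TPU is a rational bet, and why sharding and KV-cache layout decisions dominate sequence-length economics.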