Applied AI · April 19, 2026 · 1 min read

'They are not human': Why AI has 'predictable and systematic biases' when it comes to judging people


Structured trust models mean AI discriminates the same way every time: demographic bias becomes an engineered property, not a side effect. If you're using AI for hiring, lending, or moderation, you now own a repeatable bias function that regulators and plaintiffs can interrogate line by line.

Applied AI

Sources say NSA is using Mythos Preview, and a source says it is also being used widely within the DoD, despite Anthropic's designation as a supply chain risk (Axios)

When the NSA and DoD keep using a tool after it's tagged as a supply chain risk, the real signal is that operational demand for frontier models is outrunning policy and vendor-risk frameworks. If you sell into defense or critical infra, assume your buyers will quietly route around formal bans for capability they consider mission-critical — your job is to design for that gray zone, not pretend it doesn't exist.

Applied AI

Sources: Google is in talks with Marvell Technology to develop a memory processing unit that works alongside TPUs, and a new TPU for running AI models (Qianer Liu/The Information)

Google pairing TPUs with a Marvell-built memory processing unit is an admission that the bottleneck is now memory bandwidth and locality, not raw flops. If you're designing AI workloads, architecture decisions around parameter sharding, KV cache, and sequence length are about to be constrained — or unlocked — by how fast your memory fabric evolves, not your GPU count.
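The bandwidth-vs-flops claim can be made concrete with a back-of-envelope roofline check. The sketch below uses entirely illustrative numbers (a hypothetical 70B-parameter model in bf16 and a hypothetical accelerator with 1,000 TFLOP/s and 3 TB/s of memory bandwidth; none of these figures come from the article) to show why autoregressive decode tends to be memory-bound:

```python
# Back-of-envelope roofline check for autoregressive decode.
# All hardware and model numbers below are illustrative assumptions,
# not measured figures for any real TPU or GPU.

def arithmetic_intensity(n_params: float, bytes_per_param: float = 2.0) -> float:
    """FLOPs per byte moved for one decode step of a dense model.

    Decode does roughly 2 * n_params FLOPs per generated token and must
    stream every weight from memory once, i.e. n_params * bytes_per_param
    bytes, so intensity = 2 / bytes_per_param regardless of model size.
    """
    flops = 2.0 * n_params
    bytes_moved = n_params * bytes_per_param
    return flops / bytes_moved

# Hypothetical accelerator: 1000 TFLOP/s compute, 3 TB/s memory bandwidth.
peak_flops = 1000e12
peak_bw = 3e12
balance_point = peak_flops / peak_bw  # FLOPs/byte needed to be compute-bound

ai = arithmetic_intensity(70e9)  # 70B-parameter model, bf16 weights
print(f"decode intensity: {ai:.1f} FLOP/B, balance point: {balance_point:.0f} FLOP/B")
# Decode intensity (~1 FLOP/B) sits far below the balance point (~333 FLOP/B),
# so per-token latency is set by memory bandwidth, not peak FLOPs.
```

Under these assumed numbers, decode would need roughly two orders of magnitude more reuse per byte to saturate the compute units, which is why a faster memory fabric moves the needle more than more matrix-multiply throughput.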