The BPU — Hardware That Says No
The BPU does for AI what ABS did for braking: hardware that says no.
The Case for a Dedicated AI Safety Chip
In 1978, Mercedes-Benz introduced ABS (Anti-lock Braking System) in the S-Class. The idea was simple: a hardware system that prevents the wheels from locking during hard braking, regardless of what the driver does. The driver can slam the brake pedal as hard as they want. The ABS modulates the pressure. The driver cannot override it. The hardware says no.
ABS wasn't required by law when it was introduced. It was a premium feature. Then studies showed it reduced fatal accidents by 18%. By 2004, the EU mandated ABS on all new cars. By 2013, the US followed.
The same pattern applies to ESC (Electronic Stability Control), TCAS (collision avoidance in aircraft), and EGPWS (ground proximity warning in aircraft). Each one started as an optional safety feature. Each one was proven to save lives. Each one became mandatory.
We need an ABS for AI. And it needs to be hardware.
Why Software Safety Isn't Enough
In Part 2, we explored PCD Policy Circuits — formally verified software guardrails for AI agents. They're the best software-based AI safety mechanism available today. But they have a fundamental limitation: they run on the same CPU as the AI they're protecting.
This is like putting the building's fire alarm inside the furnace. It works great — until the furnace gets hot enough to melt the alarm.
Introducing the BPU: BRIK Processing Unit
The BPU is a dedicated coprocessor — a separate chip — that does one thing: verify actions against policy circuits before they execute.
It is not a CPU. It does not execute programs. It has no instruction pointer and fetches no code from memory. It evaluates pre-loaded policy circuits composed of hardwired functional units (the same monomers as BRIK-64, the universal genetic code of computation, realized in transistors rather than software) and produces a binary answer: ALLOW or BLOCK.
The number 64 is not arbitrary. Just as DNA uses 64 codons to constrain all life to deterministic biochemistry, the BPU uses 64 monomers to constrain all actions to deterministic verification. The circuit cannot drift into chaos. It cannot be compromised. It is as fixed as the genetic code itself.
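To make that contract concrete, here is a minimal sketch in Rust of what the BPU's interface could look like. Everything here is illustrative: the type names, the field layout, and the placeholder coherence check are assumptions, not a published specification. The only properties carried over from the text are the fixed table of 64 hardwired monomer units, the pre-loaded circuit topology, and the binary ALLOW/BLOCK answer.

```rust
/// The only two answers the BPU can give. No "maybe", no exception
/// path, and no way to extend this at runtime.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Verdict {
    Allow,
    Block,
}

/// One of the 64 hardwired functional units. Illustrative: the real
/// monomer set is defined by BRIK-64, not by this sketch.
#[derive(Clone, Copy)]
pub struct Monomer {
    pub opcode: u8, // 0..=63, mirroring the 64 codons of the genetic code
}

/// A pre-loaded policy circuit: a fixed topology wired over the monomer
/// table. Loaded before any action is evaluated; never modified by the
/// host CPU at runtime.
pub struct PolicyCircuit {
    pub monomers: [Monomer; 64],
    pub topology: Vec<(u8, u8)>, // directed wires between monomer units
}

impl PolicyCircuit {
    /// Evaluate a descriptor. Pure and deterministic: the same inputs
    /// always yield the same verdict.
    pub fn evaluate(&self, descriptor: &[u64]) -> Verdict {
        // Placeholder for the hardwired evaluation; the point of this
        // sketch is the shape of the contract, not the circuit itself.
        let coherent = !descriptor.is_empty() && !self.topology.is_empty();
        if coherent { Verdict::Allow } else { Verdict::Block }
    }
}
```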
How It Works
Step 1: AI proposes an action
"Send email to external@partner.com with file budget.xlsx"
Step 2: Action Translator converts to descriptor
{policy: DATA_CLASS, sensitivity: 2, trust: 1, ...}
Step 3: BPU evaluates policy circuit in hardware
64 monomer units (the genetic code of computation) evaluate the policy
EVA Router connects them according to the circuit topology
TCE Unit computes Φ_c (genetic coherence)
Step 4: Result
Φ_c = 1 → ALLOW → action executes
Φ_c ≠ 1 → BLOCK → non-maskable interrupt → action blocked

The entire evaluation takes less than 10 microseconds. The AI doesn't slow down. The user doesn't notice. But every action is verified against mathematically proven safety constraints before it touches the real world.
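Put together, the four steps might look like the following host-side sketch, reusing the Verdict and PolicyCircuit types from the earlier sketch. The translator, the descriptor fields, and raise_nmi are hypothetical names invented for illustration; only the flow itself (translate, evaluate in hardware, execute or trap) comes from the steps above.

```rust
/// Hypothetical descriptor matching the shape shown in Step 2. Field
/// names and the sensitivity/trust scales are assumptions.
struct ActionDescriptor {
    policy: PolicyClass,
    sensitivity: u8, // e.g. 0 = public .. 3 = restricted (assumed scale)
    trust: u8,       // e.g. 0 = unknown .. 3 = internal (assumed scale)
}

enum PolicyClass {
    DataClass,
    // ... other policy families would live here
}

/// Stand-in for the Action Translator.
fn translate(_action: &str) -> ActionDescriptor {
    ActionDescriptor { policy: PolicyClass::DataClass, sensitivity: 2, trust: 1 }
}

/// Pack the descriptor into the word stream the circuit consumes.
fn encode(d: &ActionDescriptor) -> Vec<u64> {
    let class = match d.policy {
        PolicyClass::DataClass => 0u64,
    };
    vec![class, d.sensitivity as u64, d.trust as u64]
}

/// On real hardware this would be a non-maskable interrupt; here it is
/// only a marker.
fn raise_nmi() {
    eprintln!("BPU: BLOCK, non-maskable interrupt raised");
}

fn guard_action(bpu: &PolicyCircuit, action: &str) -> Verdict {
    // Step 1: the AI has proposed `action` (already serialized).
    // Step 2: the Action Translator turns it into a descriptor.
    let descriptor = translate(action);
    // Step 3: the BPU evaluates its pre-loaded policy circuit.
    let verdict = bpu.evaluate(&encode(&descriptor));
    // Step 4: BLOCK raises an interrupt that software cannot mask.
    if verdict == Verdict::Block {
        raise_nmi();
    }
    verdict
}
```

In this picture the host calls guard_action, but it cannot skip Step 4: because the interrupt is non-maskable, the trap fires regardless of what the calling code does with the returned verdict.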
The Economics
"But adding a chip is expensive."
Is it, though? A BPU chip at volume costs $5-10. Knight Capital's trading bug (2012) cost $440 million. The Boeing 737 MAX: 346 lives and over $20 billion. The Uber AV fatality (2018): one life plus millions in legal costs. Smart contract hacks in 2023 alone: $1.7 billion. The Therac-25 radiation overdoses: at least 3 lives. A $10 chip that prevents any one of these incidents pays for itself approximately ∞ times over.
The real economics:
For AI companies: reduced liability, faster regulatory approval, competitive differentiation
For medical device companies: simplified FDA certification path (formally verified hardware)
For automotive companies: ISO 26262 ASIL-D compliance through hardware verification
For financial companies: provable regulatory compliance, elimination of flash crash risk
For insurance companies: quantifiable risk reduction = lower premiums for BPU-equipped systems
The Regulatory Trajectory
Phase 1: Invention
"Interesting, but who needs hardware safety?"
Phase 2: Early Adoption
Premium products adopt it for competitive advantage
Phase 3: Industry Standard
ISO/IEC publishes standard, reference implementation
Phase 4: Regulatory Requirement
Jurisdictions mandate it for high-risk applications
Phase 5: Universal Adoption
Nobody sells a product without it.

We've seen this with ABS (invented 1978, mandatory in the EU by 2004), airbags (invented 1973, mandatory in the US by 1998), and TCAS (conceived in 1956, mandated by the FAA in 1993).
For the BPU:
2026: Invention. PCD guardrail libraries. FPGA prototype.
2027-2028: Early adoption. AI companies integrate BPU for liability reduction.
2028-2030: Industry standard. ISO/IEC standard for hardware-verified AI safety.
2030-2035: Regulatory requirement. EU/US mandate for high-risk AI systems.
This isn't speculation. The EU AI Act (2024) already requires "appropriate technical and organizational measures" for high-risk AI. It doesn't specify hardware — yet. The first company to offer hardware-verified AI safety defines what "appropriate technical measures" means.
Mandatory BPU Scenarios
Robots in your home: A domestic robot must have a BPU to ensure it cannot injure a human, damage property, or exfiltrate personal data. Insurance companies require BPU certification before covering robot liability.
AI in hospitals: Any AI system that influences medical decisions (diagnosis, dosing, treatment planning) must route actions through a BPU. The BPU enforces dosage limits, contraindication checks, and patient safety protocols in hardware. FDA requires BPU for Class III medical AI devices.
Autonomous vehicles: Every self-driving car has a BPU that verifies driving decisions against safety policies. The BPU can trigger emergency braking independently of the main driving computer. NHTSA requires BPU for Level 4+ autonomous vehicles.
Financial trading: All algorithmic trading systems must route orders through a BPU that enforces position limits, rate limits, and risk bounds. The BPU audit log serves as regulatory evidence. SEC/ESMA require BPU for high-frequency trading systems. (A minimal sketch of such a policy follows this list.)
Military AI: Autonomous weapons systems require BPU enforcement of rules of engagement. The BPU cannot be overridden by software — only by authenticated human authorization through physical key. Required by international treaty on autonomous weapons.
Critical infrastructure: Nuclear plants, power grids, water systems — any AI-controlled critical infrastructure must have BPU verification of all control commands. CISA/NRC require BPU for AI-controlled critical infrastructure.
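As an example of what such a circuit would enforce, here is a minimal sketch of the trading scenario, reusing the Verdict type from the earlier sketch. The descriptor fields and all three limits are invented for illustration; no SEC or ESMA rule text is being quoted.

```rust
/// Hypothetical order descriptor for the trading scenario. Field names
/// are assumptions made for this sketch.
struct OrderDescriptor {
    notional_usd: u64,        // size of this order
    net_position_usd: i64,    // resulting net position if filled
    orders_this_second: u32,  // rate counter maintained in hardware
}

// Limits a certified trading policy circuit would hard-wire.
// All three values are assumed, not drawn from any regulation.
const MAX_NOTIONAL_USD: u64 = 1_000_000;
const MAX_POSITION_USD: i64 = 10_000_000;
const MAX_ORDERS_PER_SEC: u32 = 100;

fn trading_verdict(o: &OrderDescriptor) -> Verdict {
    // Position limit, risk bound, and rate limit, all checked before
    // the order ever reaches the exchange.
    let within_bounds = o.notional_usd <= MAX_NOTIONAL_USD
        && o.net_position_usd.unsigned_abs() <= MAX_POSITION_USD as u64
        && o.orders_this_second <= MAX_ORDERS_PER_SEC;
    if within_bounds { Verdict::Allow } else { Verdict::Block }
}
```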
The Policy Circuit Economy
When BPU becomes standard, a new economy emerges:
Policy Circuit Engineers: Professionals who design, verify, and certify PCD safety policies for specific industries. They write the circuits that go into the BPU. They are the safety engineers of the AI age.
Certification Bodies: Independent organizations (like UL for electrical safety, or TUV for automotive) that certify policy circuits against industry requirements. A certified policy circuit carries a stamp of approval from a recognized authority.
Policy Marketplaces: Pre-certified policy circuit libraries for common use cases: Medical dosing limits (FDA-certified), Financial trading bounds (SEC-certified), Autonomous vehicle safety (NHTSA-certified), Drone geofencing (FAA-certified), Data classification (GDPR-certified), AI action rate limiting (generic).
Just as the genetic code is universal across all life, policy circuits are universal across all AI architectures. A certified policy for medical dosing works the same on Claude, GPT, or any future LLM. Genetic code portability enables life. Circuit portability enables safety. (A sketch of a certified, portable policy package follows this list.)
Insurance Integration: Insurers assess BPU policy configurations to determine premiums. Better policies = lower premiums. BPU audit logs provide forensic evidence for claims.
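What a marketplace entry might carry can be sketched as a signed package. Every name and field below is an assumption invented for illustration; the point is only that certification travels with the circuit and is checked before loading, independently of which model sits upstream.

```rust
/// Hypothetical manifest for a certified policy circuit as it might be
/// distributed through a policy marketplace. Nothing here is a
/// published format.
struct CertifiedPolicy {
    circuit_bytes: Vec<u8>, // the compiled policy circuit
    circuit_hash: [u8; 32], // content hash the certificate signs
    certifier: String,      // e.g. "FDA", "TUV" (illustrative)
    signature: Vec<u8>,     // certifier's signature over circuit_hash
}

/// Portability is the point: loading depends only on the manifest and
/// the BPU, never on which model proposes actions.
fn load_certified(policy: &CertifiedPolicy) -> Result<(), &'static str> {
    if !signature_valid(&policy.circuit_hash, &policy.signature) {
        return Err("certification signature invalid; refusing to load");
    }
    load_into_bpu(&policy.circuit_bytes);
    Ok(())
}

fn signature_valid(_hash: &[u8; 32], _sig: &[u8]) -> bool {
    // Placeholder; a real check would verify against the certifier's
    // published public key.
    true
}

fn load_into_bpu(_bytes: &[u8]) {
    // Hardware-specific load path, out of scope for this sketch.
}
```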
The Trust Equation
Today, when an AI system causes harm, the question is: "Was the AI safe?" And the answer is always a shrug. RLHF training? Passed. Benchmarks? Passed. Red-teaming? Passed. But the incident happened anyway. Because training is probabilistic. Benchmarks are finite. Red-teaming is incomplete.
With a BPU, the question becomes: "Did the BPU allow the action?"
If yes: The policy circuit is examined. Was the policy correct for this scenario? Was there a gap in the specification? This is a tractable engineering question with a mathematical answer.
If no (BPU blocked but system overrode): The override is the liability. The BPU did its job. The human or system that ignored it bears responsibility. Clear accountability.
If the BPU wasn't present: Why not? If industry standard requires it and it was omitted, that's negligence. Just like selling a car without ABS in a jurisdiction that requires it.
This clarity of accountability — mathematical, auditable, hardware-enforced — is what regulators, insurers, and courts need.
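One way to see why this works: each of the three questions above maps to a field in the BPU's audit log. The record layout below is hypothetical (it reuses the Verdict type from the earlier sketch), but it shows how "blocked but overridden" becomes a mechanical query rather than a forensic argument.

```rust
use std::time::SystemTime;

/// Hypothetical audit-log entry; field names are invented to show how
/// the accountability questions become queryable data.
struct AuditRecord {
    timestamp: SystemTime,
    descriptor: Vec<u64>,  // what the BPU actually saw
    policy_hash: [u8; 32], // which certified circuit was loaded
    verdict: Verdict,      // what the hardware answered
    executed: bool,        // what the host system did afterwards
}

/// "BPU blocked but the system overrode it" is then a single query.
fn overrides(log: &[AuditRecord]) -> Vec<&AuditRecord> {
    log.iter()
        .filter(|r| r.verdict == Verdict::Block && r.executed)
        .collect()
}
```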
The Vision
2026: BRIK-64 ships as an immutable, formally verified artifact.
PCD guardrail libraries available as software modules.
FPGA prototype demonstrates hardware policy verification.
2028: First ASIC BPU chip fabricated.
Early adoption by AI companies and medical device makers.
ISO working group formed for hardware-verified AI safety.
2030: BPU standard published.
First regulatory requirements for high-risk AI.
Policy Circuit Engineer becomes a recognized profession.
2035: BPU is as common as TPM.
Every AI server, robot, and autonomous vehicle has one.
Hardware-verified AI safety is the baseline expectation.
2040: We look back and wonder how we ever trusted AI without hardware verification. Just as we wonder how we ever drove without ABS.

This is Part 3 of a three-part series. Part 1: What is Digital Circuitality? | Part 2: AI Safety with Policy Circuits