BRIK64
RESEARCH · FEB 20, 2026

The Verification Gap — Why Software Is 50 Years Behind Hardware

In 1965, Gordon Moore predicted that transistor counts would double every year; in 1975 he revised the pace to every two years. Software verification never kept up. Until now.

And How Digital Circuitality Finally Closes It

In 1965, Gordon Moore predicted that the number of transistors on a chip would double every year, a pace he revised in 1975 to every two years. He was right. A modern processor can pack more than 100 billion transistors, and virtually every one works correctly.

How is this possible?

Not through testing. You cannot test 100 billion transistors individually. You cannot test every possible combination of signals. The number of states exceeds the atoms in the universe.

Hardware works because of a property that software has never had: compositional verification. If Gate A is correct, and Gate B is correct, then the circuit A→B is correct. The correctness of the whole follows mathematically from the correctness of the parts.

This is not a minor technical detail. This is the foundational difference between an industry that scales reliably and one that doesn't.

The Numbers Nobody Talks About

The cost of poor-quality software in the United States alone is estimated at $2.41 trillion per year (Consortium for IT Software Quality, 2022). Not million. Trillion.

The consequences of this gap are measured in billions of dollars, and sometimes in lives. From the Ariane 5 integer overflow to the Boeing 737 MAX angle-of-attack sensor failure, the pattern is the same: a computation ran outside its validated domain, and nobody verified it at design time.

Why Software Is Different

In 1970, Edsger Dijkstra wrote: "Program testing can be used to show the presence of bugs, but never to show their absence." More than 50 years later, the software industry still relies almost exclusively on testing.

The reason is structural. Software lacks three properties that hardware has always had:

1. Verified atomic components. Every logic gate — AND, OR, NOT, NAND — has a truth table that is mathematically complete. Given any input, the output is defined. There are no "undefined behaviors" in a NAND gate.

Software functions have no equivalent guarantee. A sorting function might work for arrays of length 1 to 10,000 but fail at 10,001. A hash function might produce correct results for ASCII but corrupt Unicode. The only way to know is to test — and you can never test everything.
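The contrast is easy to demonstrate. A NAND gate's entire input space is four rows, so it can be verified exhaustively; the deliberately buggy calendar function below passes every spot check yet hides a defect that no finite test suite is guaranteed to find. (A toy illustration in Python, not PCD; both functions are invented for this example.)

```python
from itertools import product

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# The gate's input space is finite: verify it exhaustively against its truth table.
truth_table = {(False, False): True, (False, True): True,
               (True, False): True, (True, True): False}
assert all(nand(a, b) == truth_table[(a, b)]
           for a, b in product([False, True], repeat=2))

# A software function's input space is effectively unbounded.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0   # passes every test below...

assert is_leap_year(2024)      # test passes
assert not is_leap_year(2023)  # test passes
print(is_leap_year(1900))      # True -- wrong: 1900 was not a leap year
```

The tests prove the function works for 2023 and 2024; they prove nothing about 1900.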

2. Compositional correctness. When you connect two logic gates, the combined behavior is deducible from the truth tables of the individual gates and the wiring diagram. This is not an approximation. It is a mathematical fact.

When you compose two software functions, anything can happen. Function A might modify global state that Function B depends on. Function B might throw an exception that Function A doesn't handle. The interaction might create a deadlock, a race condition, or a memory leak that only manifests after 72 hours of continuous operation.
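The shared-state failure mode in particular takes only a few lines to reproduce. In this sketch (all names invented for illustration), each function is individually "correct", yet the result of composing them depends on hidden call history rather than on any explicit input:

```python
# Two functions that each "work" in isolation, coupled through hidden shared state.
cache = []

def record(x: int) -> int:
    cache.append(x)      # side effect: mutates state record() does not own
    return 2 * x

def total() -> int:
    return sum(cache)    # hidden input: depends on record()'s call history

record(3)
record(5)
print(total())  # 8 -- determined by call history, not by total()'s arguments
```

No truth table for `record` or `total` alone predicts this; the behavior lives in the coupling.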

3. Closed circuits. A physical circuit either conducts or it doesn't. Current flows in a closed loop, or it flows nowhere. There is no third option.

Software has no equivalent concept. A program can produce partial results, leak resources, enter infinite loops, or silently corrupt data while appearing to work normally. There is no binary test for "does this program work?" because the question itself is ill-defined without a formal specification — which most software lacks.

What Would It Take?

What if software had all three properties? What if programs were built from verified components that composed according to algebraic laws and formed closed circuits?

This is not a theoretical question. The answer is Digital Circuitality.

Verified components: 64 atomic operations, each with a mathematical proof in Coq. Not tested — proven. The proof guarantees correct output for every possible input, deterministically, with no side effects.

Compositional correctness: EVA algebra defines three composition operators — sequential, parallel, and conditional — each of which preserves the correctness of its parts. If Component A is verified and Component B is verified, then their composition is verified. This is a theorem, not a hope.

Closed circuits: The Thermodynamic Coherence Engine (TCE) computes a closure metric Φc for every program. If Φc = 1, the circuit is closed: every input is consumed, every output is produced, every execution path terminates. If Φc ≠ 1, the program does not compile.
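EVA algebra and the TCE are BRIK-64's own machinery; the general idea they build on, propagating value intervals through a composition and then checking containment, can be sketched in a few lines of Python. (All names below are invented for illustration; this is textbook interval arithmetic, not the actual engine.)

```python
# Toy sketch: interval propagation plus a closure check.

def seq(f, g):
    """Sequential composition of interval transformers:
    feed f's output interval straight into g."""
    return lambda iv: g(f(iv))

def scale(k):    # component: multiply by a nonnegative constant
    return lambda iv: (iv[0] * k, iv[1] * k)

def shift(c):    # component: add a constant
    return lambda iv: (iv[0] + c, iv[1] + c)

def closes(pipeline, input_dom, output_dom):
    """The pipeline's derived interval must sit inside the declared
    output domain -- a loose analogue of requiring Phi_c = 1."""
    lo, hi = pipeline(input_dom)
    return output_dom[0] <= lo and hi <= output_dom[1]

pipeline = seq(scale(3), shift(100))
print(pipeline((0, 900)))                       # (100, 2800): deduced, not tested
print(closes(pipeline, (0, 900), (0, 32767)))   # True: the circuit closes
print(closes(pipeline, (0, 50000), (0, 32767))) # False: reject before running
```

The composed bound follows mechanically from the bounds of the parts, which is exactly the property the section claims software has been missing.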

The Ariane 5 Through This Lens

Let's revisit the Ariane 5 failure through the lens of Digital Circuitality.

The bug: a 64-bit floating-point value representing horizontal velocity was converted to a 16-bit signed integer. The velocity exceeded 32,767, the maximum value a 16-bit signed integer can hold. The conversion overflowed, raising an unhandled exception that shut down both inertial reference systems, and their diagnostic output was then interpreted as flight data. The rocket veered off course. The self-destruct system activated.
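The failure mode itself is easy to reproduce. A minimal Python sketch of a C-style cast from a float into a 16-bit signed integer (illustrative values only, not the actual SRI code):

```python
import struct

def to_int16(x: float) -> int:
    # Emulate a C-style narrowing cast: keep the low 16 bits of the
    # integer part and reinterpret them as a signed 16-bit value.
    return struct.unpack('<h', struct.pack('<H', int(x) & 0xFFFF))[0]

print(to_int16(30000.0))  # 30000  -- fine while velocity stays in range
print(to_int16(50000.0))  # -15536 -- the wraparound nobody checked at design time
```

The cast is silent: nothing in the call site reveals that 50,000 cannot survive the trip.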

In Digital Circuitality:

PC navigation {
    domain velocity: Range [0, 50000];
    domain output_16bit: Range [0, 32767];

    fn convert_velocity(vel) {
        // TCE: vel ∈ [0, 50000], output ∈ [0, 32767]
        // COMPILE ERROR: output domain [0, 50000] exceeds [0, 32767]
        // The circuit does not close. The program does not compile.
        return vel;
    }
}

The compiler would reject this program. Not because of a test. Not because of a code review. Because the domains are incompatible and the circuit doesn't close.

The domain declaration — Range [0, 50000] for velocity, Range [0, 32767] for output — makes the mismatch visible at design time. The engineer must either widen the output domain (use 32-bit) or narrow the input domain (clamp velocity). Either way, the decision is explicit and verified.
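Both remedies can be sketched in plain Python (hypothetical helper names; the point is only that each choice is explicit rather than silent):

```python
def clamp_to_int16(vel: float) -> int:
    # Remedy 1: narrow the input -- saturate at the 16-bit bounds
    # instead of silently wrapping.
    return max(-32768, min(32767, int(vel)))

def widen_to_int32(vel: float) -> int:
    # Remedy 2: widen the output -- 32 bits hold any plausible velocity.
    assert -2**31 <= int(vel) < 2**31
    return int(vel)

print(clamp_to_int16(50000.0))  # 32767 -- saturated, explicit, no wraparound
print(widen_to_int32(50000.0))  # 50000 -- preserved in a wider domain
```

Either way, the out-of-range case is handled by a decision visible in the source, not by undefined overflow behavior.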

The Economics of Verification

"But formal verification is expensive," the argument goes. "We can't prove everything correct."

This argument was valid in 2005. It is no longer valid.

The BRIK-64 compiler performs verification at compile time, automatically, at the speed of compilation. There is no manual proof writing. There is no theorem prover to learn. You write PCD, declare your domains, and the compiler verifies closure.

The cost of writing domain velocity: Range [0, 50000]; is approximately 3 seconds of typing.

The cost of the Ariane 5 failure was $370 million and set the European space program back by a year.

The cost of the Boeing 737 MAX was $20 billion and 346 lives.

Closing the Gap

For 50 years, software has been an unverified craft practiced at industrial scale. We build systems that control aircraft, medical devices, financial markets, nuclear reactors, and autonomous vehicles — and we verify them with tests that cover, optimistically, 70% of execution paths.

Hardware closed this gap decades ago. Not because hardware engineers are smarter, but because they have better tools. Compositional verification. Closed circuits. Verified components.

Digital Circuitality brings these tools to software. Not as a research prototype. Not as a theoretical framework. As a compiler you can install today:

curl -L https://brik64.dev/install | sh
brikc run your_program.pcd

The verification gap has existed for 50 years. It doesn't have to exist for 50 more.

PCD — A Language Built for AI, Supervised by Humans

PCD — Printed Circuit Description — is not designed to replace Python or Rust. It is designed to be the language that AI agents write and the compiler automatically certifies.

With a finite set of operations and a bounded design space, an AI model can learn the entire PCD language in minutes. It generates programs. The BRIK-64 compiler verifies them. If the program is correct, it compiles to JavaScript, Python, Rust, native x86-64, or BIR bytecode. If it is incorrect, it does not compile.

The human's role is not to write PCD — it is to design the domains. The engineer declares: "velocity must be in [0, 900]", "temperature must be in [-40, 1200]", "confidence must be in [0, 100]." The AI writes the logic. The compiler verifies the boundaries. The human supervises the design. The circuit closes.

This is a fundamental shift. Instead of reviewing thousands of lines of AI-generated code, the engineer reviews domain declarations. Instead of trusting the AI's output, the compiler verifies it. Trust is replaced by proof.

The BPU — When Software Isn't Enough

Digital Circuitality does not stop at software. The logical endpoint is hardware: the BRIK Processing Unit (BPU) — a dedicated chip with 64 monomer gates in silicon, an EVA Router for composition, and a TCE Unit for real-time closure verification.

Imagine an AI system where safety policies are not software that can be overridden, updated, or bypassed — but hardware circuits that physically prevent unsafe actions. A non-maskable BLOCK signal that no software can override. Not because the AI chooses to obey. Because physics does not negotiate.

This is the trajectory: voluntary adoption → industry standard → regulatory requirement. Like ABS brakes in cars. Like circuit breakers in buildings. Like seatbelts. First, a good idea. Then, the law.

What Comes Next

Digital Circuitality is not a theoretical proposal. The BRIK-64 compiler is available today. The SDKs are published on npm, PyPI, and crates.io. The documentation is live. The certified math library enables exact computation at designer-specified precision. The domain declaration system allows engineers to define boundaries at design time.

What comes next is adoption. The first certified circuits in production systems. The first public registry of reusable, verified components. The first domain-specific circuit libraries: finance, aerospace, medical, autonomous driving. The first BPU prototypes.

And eventually, the question will not be "why verify?" but "why didn't we verify sooner?"

The verification gap has existed for 50 years. The tools to close it exist now.

Further reading: What if Software Worked Like DNA? | PCD for AI Agents | Why Your Calculator Is Lying to You