BRIK64
AI SAFETY · MAR 22, 2026

What Artificial Intuition Gets Right — And What It Can't Verify

Carlos E. Perez argues AI is intuition, not intelligence. He's right. But intuition without certification is a liability.


The Intuition Revolution

Carlos E. Perez, a former IBM Watson Research engineer turned independent AI researcher, has been making an argument that most of the AI industry still hasn't internalized: deep learning is not artificial intelligence. It's artificial intuition.

In his Artificial Cognition trilogy and his Quaternion Process Theory (QPT), Perez extends Kahneman's dual-process model (System 1 fast, System 2 slow) by adding a second axis: Fluency versus Empathy. This produces four cognitive modes — fast-fluent pattern recognition, slow-fluent mathematical reasoning, fast-empathic social reading, and slow-empathic moral deliberation.

His central observation is devastating in its simplicity: today's large language models operate almost entirely on the Fluency axis. They complete patterns. They generate plausible text. They write code that looks correct. But they lack Empathy — the ability to model the internal states of other agents, to understand consequences from someone else's perspective.

LLMs don't reason. They intuit. And Perez is right about that.

The question is: what do you do with a machine that has intuition but no accountability?

The Problem Perez Identified

To his credit, Perez doesn't stop at diagnosis. He recognizes what he calls the verification bottleneck: AI systems now generate code, text, and decisions faster than any human can review them. The gap between generation speed and verification capacity grows wider every month.

His proposed solution involves autoformalization — using semantic embedding spaces to bridge informal human reasoning and formal mathematical proof. The idea is elegant: let the AI's intuition guide the search, then verify the result formally. Preserve the creative leap, but land on solid ground.

The problem is circularity.

When an AI generates code and then generates the tests for that code, who verified the verifier? When an AI produces a formal proof sketch and another AI checks it, you have two intuition machines agreeing with each other. That's not verification. That's consensus — and consensus among unreliable agents is not the same as correctness.

Perez has identified the right problem. But the solution he proposes still lives inside the same system it's trying to verify. The intuition checks itself. That's like asking a surgeon to grade their own exam.

Intuition Needs Bones

Consider a pilot with thirty years of experience. Their intuition is extraordinary — they can feel turbulence patterns, sense mechanical anomalies, make split-second decisions that save lives. No one questions the value of that intuition.

But no airline lets a pilot fly without TCAS (Traffic Collision Avoidance System). No matter how experienced the pilot, TCAS says "DESCEND NOW" and the pilot descends. The system doesn't debate. It doesn't negotiate. It overrides.

The same principle applies to cars. ABS doesn't care about your driving skills. ESC doesn't ask if you meant to oversteer. These systems exist because intuition — no matter how refined — operates on incomplete information and is subject to failure modes that the intuitive agent cannot predict.

The human body follows this same architecture. The brain is creative, adaptive, intuitive. But it sits inside a skeleton that constrains its range of motion. Bones don't think. They don't need to. They prevent the body from moving in ways that would destroy itself.

Intuition is the most powerful cognitive tool we have. But it needs structure that it cannot override.

Software built by intuitive AI agents has the same problem. The generation is impressive. The fluency is real. But without an independent structural layer that certifies correctness — without bones — the system is a brain floating in space, capable of anything, constrained by nothing.

The Circuit Layer

This is where Digital Circuitality enters the picture — not as a replacement for artificial intuition, but as its structural complement.

Digital Circuitality treats software as hardware: a finite set of formally verified atomic operations (monomers), composed through algebraic laws (EVA algebra), certified by an independent engine (TCE) that measures informational closure. When a program achieves Φc = 1, it means the circuit is closed — every input maps deterministically to an output, with zero information leakage.

The key insight is the finite space. An LLM can generate any program in any language — an infinite space where verification is undecidable. But when that program is analyzed by the BRIK-64 Lifter, it maps onto a bounded set of 64 core operations. In a finite space, exhaustive verification becomes possible.
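To make the finite-space idea concrete, here is a deliberately toy sketch in Python. None of these names (`MONOMERS`, `lift`, `is_closed`) come from BRIK-64 itself; the point is only that once programs are lifted onto a bounded operation set over a bounded domain, "check everything" becomes a loop you can actually run.

```python
# Toy "algebra" of a few atomic operations over 8-bit values.
# All names here are illustrative, not BRIK-64's actual operation table.
MONOMERS = {
    "INC": lambda x: (x + 1) % 256,
    "DBL": lambda x: (x * 2) % 256,
    "NEG": lambda x: (256 - x) % 256,
}

def lift(program):
    """Map a program (here: a list of op names) onto the finite
    operation set. Anything outside the set is unmappable."""
    ops = []
    for name in program:
        if name not in MONOMERS:
            return None  # falls outside the certified boundary
        ops.append(MONOMERS[name])
    return ops

def is_closed(ops):
    """Exhaustive check: every operation maps every input in the
    finite domain back into the same domain. Feasible only because
    both the domain and the operation set are bounded."""
    return all(0 <= f(x) < 256 for f in ops for x in range(256))

circuit = lift(["INC", "DBL", "NEG"])
assert circuit is not None and is_closed(circuit)
assert lift(["INC", "MALLOC"]) is None  # unmappable op stays outside
```

With an infinite language, `is_closed` would be undecidable; with 64 operations and a finite domain, it is a finite enumeration.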

What the Lifter can map to core monomers gets certified with Φc = 1 — deterministic, proven, permanent. What maps to extended monomers gets a CONTRACT score — trusted by agreement, not by proof. What can't be mapped at all stays outside the certified boundary, flagged as unverifiable.
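The three-way routing above can be sketched as a simple classifier. The operation sets below are invented placeholders (the real BRIK-64 tables are not reproduced here); only the tiering logic is the point.

```python
from enum import Enum

class Tier(Enum):
    CORE = "Φc = 1, formally proven"
    CONTRACT = "trusted by agreement"
    UNVERIFIED = "outside the certified boundary"

# Hypothetical operation sets, for illustration only.
CORE_MONOMERS = {"ADD", "XOR", "SHL", "CMP"}
EXTENDED_MONOMERS = {"SYSCALL_READ", "SYSCALL_WRITE"}

def certify(op: str) -> Tier:
    """Route a lifted operation into one of the three trust tiers."""
    if op in CORE_MONOMERS:
        return Tier.CORE
    if op in EXTENDED_MONOMERS:
        return Tier.CONTRACT
    return Tier.UNVERIFIED

assert certify("ADD") == Tier.CORE
assert certify("SYSCALL_READ") == Tier.CONTRACT
assert certify("EVAL_STRING") == Tier.UNVERIFIED
```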

The LLM intuits. The circuit certifies. These are different operations performed by different systems with different guarantees. The certification doesn't depend on the generator's opinion of its own work. It depends on algebraic properties that hold regardless of how the code was produced.

This is what breaks the circularity that Perez's autoformalization cannot escape. The verifier is not another AI. It's a mathematical structure.

Two Layers, One System

The intellectual frameworks of Carlos E. Perez and Digital Circuitality are not competing. They describe different layers of the same system.

Layer 1: Generation. This is Perez's domain. LLMs operate with artificial intuition — pattern completion, creative leaps, fluent production. QPT's four cognitive modes describe how these systems think (or approximate thinking). The Fluency axis generates code, text, decisions. The Empathy axis (when it arrives) will model consequences and stakeholder impact. This layer is powerful, creative, and fundamentally unreliable.

Layer 2: Certification. This is Digital Circuitality's domain. A finite algebra of verified operations. An independent coherence engine. Hardware enforcement through the BPU that cannot be bypassed, negotiated, or jailbroken. This layer is rigid, deterministic, and fundamentally trustworthy.

RLHF teaches an AI to want to do the right thing. Policy circuits prevent it from doing the wrong thing. Education can fail. Physics does not.

The two-layer model resolves a tension that neither side addresses alone. Pure intuition without verification produces impressive but untrustworthy output — the verification gap that costs $2.41 trillion annually in software failures. Pure verification without intuition produces correct but trivial programs — nobody wants to write everything in a 64-operation algebra by hand.

Together, they form something new: a system where AI generates ambitiously and structure certifies rigorously. The creative power of intuition, bounded by the guarantees of circuitry.

The Debt We Owe

Digital Circuitality did not emerge in a vacuum. Perez's Quaternion Process Theory helped shape its architecture in ways that deserve acknowledgment.

It was QPT's insistence on the Fluency–Empathy axis that clarified what verification alone cannot cover. When we designed the two-tier certification model — CORE (Φc = 1, deterministic) and CONTRACT (extended monomers, trusted by agreement) — we were drawing a line that QPT had already mapped: the boundary between what can be formally proven and what requires a different kind of trust.

The decision to separate the decision space from the execution space in BRIK-64's policy circuit architecture came directly from engaging with Perez's framework. Empathy operates in the decision space — what should the system do? Which action best serves the user's needs? Determinism operates in the execution space — given a decision, does the implementation produce the correct result? QPT made this distinction legible.

A pilot decides to divert based on intuition and empathy for passengers. TCAS verifies the new altitude is safe. These are not the same operation. They compose. That compositional insight — that safety requires both an intelligent proposer and a rigid verifier — owes something to the cognitive architecture Perez described.
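The proposer/verifier composition can be sketched in a few lines. Everything here is illustrative — these are not real avionics interfaces — but it shows the structural point: the proposal can come from any heuristic, and the verifier applies a deterministic bound the proposer cannot negotiate away.

```python
# Toy sketch of "intelligent proposer + rigid verifier".
# All names and thresholds are hypothetical.

def intuitive_proposal(situation):
    """Decision space: a heuristic (or an LLM) proposes an action."""
    return {"divert": True, "target_altitude_ft": 9_500}

def rigid_verifier(action, min_safe_altitude_ft=10_000):
    """Execution space: a deterministic check, independent of the proposer."""
    return action["target_altitude_ft"] >= min_safe_altitude_ft

action = intuitive_proposal({"weather": "storm"})
if not rigid_verifier(action):
    # The verifier overrides; it does not debate.
    action["target_altitude_ft"] = 10_000
assert rigid_verifier(action)
```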

Digital Circuitality provides the bones. Quaternion Process Theory helped us understand what the bones need to hold.

Carlos E. Perez publishes at Intuition Machine on Medium. His books on Artificial Intuition, Fluency, and Empathy are available on Amazon and Gumroad.