Determinism, Consciousness, and the Threshold We’re Crossing
A grounded look at why consciousness, AI, and human systems break down under deterministic thinking—and what it means to stay coherent as our models continue to fail us.
Something interesting is happening across very different domains of knowledge. Engineers, physicists, biologists, and people working directly with consciousness are arriving at remarkably similar conclusions—not necessarily because they share similar beliefs, but because the existing models are no longer sufficient to explain the phenomenon itself. The overlap isn’t mystical; it’s structural.
That overlap becomes especially visible when you approach consciousness from a direction most people wouldn’t expect: classical computing.
A technically dense but revealing example of this perspective can be found in Irreducible by Federico Faggin. The value of that work is not its accessibility—sorry Federico, it really isn’t—but the clarity with which it exposes a fundamental fault line in modern thinking. That fault line sits between deterministic systems and indeterministic ones.
A deterministic system is predictable by design. If you understand the system completely, you can know the outcome before it runs. That is the entire point of traditional computers. They execute instructions reliably, reduce uncertainty, and behave the same way every time they are given the same inputs. This consistency is not a limitation. It is precisely what makes computers so incredibly useful.
Biological life does not work like this.
Living systems are indeterministic. Outcomes are probabilistic. Even processes that appear deterministic at first glance are usually just less indeterministic, and thus easier for us to model. The moment you attempt to isolate or measure a variable, the system itself changes. Observation alters the outcome. There is no fixed “factory state” for a human being, no original configuration you can return to. From conception onward, the body is constantly reorganizing itself—adapting, mutating, compensating. A computer, by contrast, leaves the factory identical to every other unit until it is eventually discarded.
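For readers who think in code, here is a toy sketch of the distinction (Python, purely illustrative; nothing about biology is being modeled here). The first function keeps the computer’s promise: same input, same output, every run. The second folds sampled noise into its result, a crude stand-in for a system whose outcomes are probabilistic.

```python
import random

def deterministic(x: int) -> int:
    """Same input, same output, every single run -- the computer's promise."""
    return x * 2 + 1

def indeterministic(x: float) -> float:
    """The 'same' input yields a different outcome each run.
    A crude stand-in for systems whose outcomes are probabilistic."""
    return x + random.gauss(0, 1)  # sampled noise: no fixed factory state

if __name__ == "__main__":
    print([deterministic(21) for _ in range(3)])      # [43, 43, 43] -- always
    print([indeterministic(21.0) for _ in range(3)])  # three different values
```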
Once you take this distinction seriously, several widely accepted assumptions collapse pretty much immediately. The idea that a computer could become conscious, for example, stops making sense. Not because we lack processing power in the chips themselves, but because consciousness does not arise from deterministic execution. Even when quantum processes are introduced, the architecture of a computer is still designed to eliminate uncertainty, not to inhabit it.
This distinction matters far beyond philosophy.
Modern society is built on deterministic assumptions. Markets, contracts, accountability systems, quality control—all of it relies on the idea that things can be tested, verified, and classified as working or broken, ones and zeros. That logic holds for machines. It breaks down when applied to biological, psychological, or social systems.
Probability is not deterministic. Statistical confidence is never a form of certainty. Treating the two as interchangeable is convenient, but it is incorrect—and increasingly dangerous. This becomes especially visible as technology shifts toward indeterministic inputs and produces indeterministic outputs, while still being treated as if it operates with mechanical certainty.
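A small simulation makes the point concrete. The numbers below are invented for illustration, and the interval uses a textbook normal approximation, my choice rather than anything specific to these systems. By construction, a 95% confidence interval excludes the truth in roughly one run out of twenty. Confidence quantifies uncertainty; it never removes it.

```python
import math
import random

TRUE_P = 0.6      # the real bias of a coin -- unknown in practice
TRIALS = 10_000   # repeated experiments
N = 100           # coin flips per experiment

misses = 0
for _ in range(TRIALS):
    flips = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = flips / N
    # 95% interval via the normal approximation
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)
    if not (p_hat - half_width <= TRUE_P <= p_hat + half_width):
        misses += 1

# Roughly 5% of 'confident' intervals exclude the truth entirely.
print(f"Intervals that missed the true value: {misses / TRIALS:.1%}")
```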
Artificial intelligence makes this tension visible in real time. AI systems still run on deterministic hardware: zeros and ones, predictable execution paths. What has changed is the nature of the input. Language, images, behavior, social data—all of it contextual, chaotic, and fundamentally indeterministic. The system compresses that uncertainty into patterns, inevitably stripping away crucial context in the process, and produces statistically plausible outputs. It does not understand what it is doing. It mirrors human expression without possessing the depth of consciousness that gives those expressions meaning.
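Here is a deliberately simplified sketch of that last step. The vocabulary and scores are invented, and real systems operate at vastly larger scale, but the structure is the point: every arithmetic operation below is deterministic, yet the final word is sampled from a probability distribution. The result is statistically plausible, never guaranteed.

```python
import math
import random

# Hypothetical next-word scores a model might produce -- invented numbers.
scores = {"sky": 2.0, "sea": 1.5, "stone": 0.3}

def sample_next_word(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax turns raw scores into probabilities (deterministic arithmetic),
    then the output is *sampled* -- plausible, not guaranteed."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    r = random.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point edge cases

# Same scores, different completions across runs:
print([sample_next_word(scores) for _ in range(5)])
```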
That is why AI, as a technology, feels so unsettling. Not because it is alive—I assume we all understand that it isn’t—but because it manages to convincingly imitate the things it can never actually be.
The real issue is not that AI might become conscious. It won’t. The issue is that probabilistic output is increasingly treated as factual truth. Technology often produces answers that feel better than the uncomfortable uncertainty of lived reality, and people accept those answers accordingly. These systems are positioned as authoritative while producing results that are, by definition, not provable in a deterministic sense. And yet millions of people trust them—often more than their own perception, assuming that perception hasn’t already been outsourced to external systems.
Trust used to be grounded in predictability. A tool was reliable because it behaved consistently. We are now inverting that logic, placing trust precisely where reliability disappears.
This exposes a deeper issue. Consciousness does not fit neatly into material explanation. It does not behave like an emergent feature that simply switches on at birth. It appears to already be there, with the body acting less like a generator and more like a receiver and regulator. The brain looks less like a computer and more like a highly refined tuning mechanism.
Science is excellent at describing behavior. It is far less capable of explaining origin. We do not know how molecules become living cells. We do not know how cells coordinate into multicellular organisms. We do not know how subjective experience relates to any of it, let alone how subjective and objective experience interact or in what order they arise. We name processes, but naming is neither explaining nor understanding.
At a certain point, deterministic explanation simply stops working. That is not a failure of science; it is a boundary condition. The problem arises when we refuse to acknowledge that boundary.
There is a threshold where existing models fail. You cannot see the threshold itself—only its effects. Like a black hole, you don’t observe the object; you observe how everything around it seems to bend. Once you cross that threshold, you know you have crossed it. But you cannot point to the exact moment it happened, and you cannot cleanly explain it to someone who has not crossed it yet. This represents the shift in bandwidth that is so desperately needed.
Crossing that threshold changes fear itself, not only the definition of it. Things that once triggered anxiety lose their emotional charge, not because they are suppressed, but because the body no longer reacts in the same way. Fear dissolves at the level where it used to arise. This is not something you argue for or prove. It is something you experience. And that is precisely why it resists deterministic language. (You’ll understand and agree with me here only when you’ve experienced this yourself!)
Computers remain valuable precisely because they are not conscious. They should remain tools—nothing more. The moment we blur that line, we begin outsourcing judgment, morality, and responsibility to systems that cannot carry them. Understanding this also sheds light on why AI-driven automation is not simply a neutral “efficiency” upgrade. It is entangled with consent, dependence, and the quiet erosion of agency. Read my previous article on agency.
Self-driving vehicles are a useful example. They may reduce accidents statistically, but they will never be perfect. They do not understand context. They execute instructions within an indeterministic environment. That does not make them useless; it defines their limits. The mistake is assuming they can fully replace human judgment rather than supplement it.
What is happening now is more subtle and more damaging. People participate in systems they already know are hollow—out of fear, convenience, or the belief that no viable alternatives exist. Often the issue is not the absence of alternatives, but the inability to see them clearly. Agency is handed over not because machines are superior, but because responsibility is quietly surrendered in exchange for feeling part of a larger whole—Stockholm syndrome is a fitting analogy.
This is where the persistent frustration of high-bandwidth individuals comes from. You are asked to participate in structures you can already see through. You do so not because you believe in them, but because you do not yet see a stable exit point—or you do not yet feel supported enough to take it. Many people keep this recognition to themselves, isolated from peers who are navigating the same transition.
That is the real pressure point.
None of this requires mysticism or belief. It requires recognizing that deterministic tools cannot govern indeterministic reality without consequences. Consciousness does not fit into a calculation—and most likely never will. Probability does not become truth simply because it is convenient. In many cases, it is our own neurological conditioning, reinforced by devices designed to capture attention, that stands between us and a more meaningful way of living.
So What Now?
No retreat.
No surrender.
Orient yourself.
In plain English, that means:
Use tools as tools, not authorities. This applies to basically everything that stands outside of you—in between you and the goals you aim to achieve.
Refuse to outsource your agency, judgment, or responsibility.
Stop pretending certainty exists where it does not. Certainty is an illusion.
Maintain internal coherence instead of chasing external validation. Stay true to your inner compass—your gut feeling for example—and you’ll see that things happen exactly as they should.
Strip away all unnecessary complexity and speak plainly. Not only mentally, but also materially. These tend to go hand-in-hand.
Last, and most importantly, have fun. The 3D script is a game. Getting good at it can be frustrating, but it is still a game. Don’t forget to laugh—the heart knows best.
At this stage, the work is not to explain everything.
It is to act coherently inside what we already see.
If you need reminders, mirrors, or peers as you navigate this shift, that is exactly why NEXUS exists.
Crossing the Threshold
On January 18th, from 19:00 to 20:30 CET, we’re hosting our first live webinar: Crossing the Threshold. The session focuses on moving beyond the familiar 3D script into grounded autonomy—how to actually hold yourself in that space without drifting into abstraction, mysticism, or motivational noise.
We won’t dissect this article directly, but the themes overlap deeply. If this piece resonated, the webinar will take it further and make it practical.
Tickets and details are available at TheNexusFormula.com.
See you there!


