High-Bandwidth Humans Fail Online
A follow-up on why digital platforms suppress meaning by design, how “engagement” trains self-blame, and why real connection only happens offline.

This text follows directly from my previous article, not because that argument was incomplete per se, but because I feel it needs to be sharpened until it becomes uncomfortable. The meaning crisis is not theoretical, cultural, or philosophical; it is structural. It is enforced daily through systems that pretend to connect people while quietly dissolving every condition under which meaning could actually emerge. With my earlier piece explaining why explanation fails, this one names what replaces explanation in practice: platforms that hollow out responsibility, flatten relevance, and train people to mistake visibility for value.
It is important to be explicit here. Social media platforms do not fail to produce depth by accident. They do not accidentally discourage thought, reflection, or genuine interaction. They are structurally incapable of supporting these things, because these things threaten the only metric that matters: retention. Anything that slows consumption, redirects attention away from the platform, or asks people to sit with complexity is treated as a defect. External third-party links are suppressed because they break containment. Long-form thought is deprioritized because it cannot be skimmed. Silence is punished because it cannot be monetized.
What makes this system particularly insidious is that it never presents itself as coercive. It presents itself as opportunity, a potential for absolutely everything. All you have to do is post more, engage more, and, of course, be consistent. If nothing happens, the implication is clear: you did it wrong. Low engagement becomes a moral signal. Visibility becomes synonymous with relevance. People internalize a structural outcome as personal inadequacy and begin to self-censor accordingly. Over time, they do not need to be silenced. In the end, they silence themselves.
This is where optimism becomes poisonous. Not hope in the existential sense, but the shallow optimism that insists persistence will be rewarded if you just play along. This belief keeps people trapped far longer than outright repression ever could. The system does not need to tell you that your ideas are unwelcome. It only needs to ignore them long enough for you to doubt yourself. In that sense, low engagement is not neutral feedback; it is a form of behavioral training.
The same dynamic plays out across so-called professional and intellectual platforms—LinkedIn and even Substack fall into this category. Spaces that claim to foster discussion tend to quickly collapse into performative affirmation and recycled consensus. Everything is framed as growth, leadership, or value creation, yet nothing of substance is allowed to disrupt the tone. Disagreement becomes a liability. Ambiguity becomes weakness. Thought is reduced to slogans because slogans are safe, shareable, and empty enough not to threaten anything.
The claim that these platforms are “free” is one of the most effective misdirections ever normalized. You do not pay with money up front, but you pay with something far more valuable: behavioral data, relational maps, and predictive patterns that extend well beyond anything you are consciously aware of providing. Contact syncing in phone applications like WhatsApp, Telegram, and Signal generates shadow profiles of people who never consented—either to the company or to you for uploading their information. Location data reconstructs routines. Social graphs infer beliefs, vulnerabilities, and future decisions. This is basic system design, and surprisingly easy to implement effectively.
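To make concrete how little machinery this requires, here is a minimal, hypothetical sketch (all numbers, names, and function names are invented for illustration): merging each user's uploaded address book into profiles keyed by phone number automatically yields records for people who never signed up at all.

```python
from collections import defaultdict

def build_shadow_profiles(uploads):
    """Merge per-user contact uploads into profiles keyed by phone number.

    `uploads` maps an uploading user's number to the (number, name) pairs
    found on their phone. A profile is created for every number seen,
    including numbers that never registered with the service themselves.
    """
    profiles = defaultdict(lambda: {"names_seen": set(), "known_by": set()})
    for uploader, contacts in uploads.items():
        for number, name in contacts:
            profiles[number]["names_seen"].add(name)   # how others label this person
            profiles[number]["known_by"].add(uploader)  # who carries them in their contacts
    return dict(profiles)

# Two registered users sync their contacts; a third person never signed up.
uploads = {
    "+31600000001": [("+31600000009", "Dr. A"), ("+31600000002", "B")],
    "+31600000002": [("+31600000009", "A (tennis)")],
}
shadow = build_shadow_profiles(uploads)
# "+31600000009" never consented, yet the service now holds two names
# for them and knows exactly who has them in their address book.
```

The point of the sketch is not the code but the asymmetry: the non-user contributed nothing, consented to nothing, and is profiled anyway, purely as a side effect of other people's convenience.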
What makes this especially grotesque is the contrast between what these machines are capable of and what they are used for. We built computational systems powerful enough to model physical reality, simulate biological processes, and explore questions about consciousness, origin, and structure. And then, in all its irony, we deploy that power to optimize outrage cycles, maximize dopamine loops, and induce compulsive scrolling in nearly the entire global population. Then we talk about the electricity usage and environmental impact of AI, and of technology in general, as if we actually care. Oh, do fuck off, please. Does this not sound absolutely ridiculous? Cultural surrender at its peak.
People often respond by pointing out that there are online communities that work. This is technically true and practically misleading. Online spaces can function when they extend something that already exists offline. When people share embodied history, mutual obligation, and real consequence, digital tools can support coordination and continuity. But trying to generate gravity purely online, or online first, is fantasy. Trust does not emerge from endless feeds. Responsibility does not arise from anonymity. Meaning does not survive abstraction cycles.
This is why role models cannot be scaled digitally. A role model is not a curated output stream. It is someone you observe across boredom, failure, contradiction, and pressure—someone whose limitations you can actually see, not just their conclusions. The internet only shows you what is selected. It cannot transmit presence, accountability, or consequence. The more people look online for figures to emulate, the more distorted their expectations of life become.
The wider effect of this architecture is visible everywhere. Information density increases while orientation collapses. Communication accelerates while understanding disintegrates. People are constantly stimulated and permanently isolated. They know everything about distant catastrophes and nothing about the people sitting next to them. They are emotionally mobilized about issues they cannot influence and disengaged from the few domains where their actions might actually matter.
You do not need scientific proof to notice this—in fact, what we tend to call science has contributed in large part to this limited reality. You only need to pay attention to how conversations unfold nowadays, how quickly disagreement escalates and ego takes over, and how rarely anything changes. The world feels more divided not because people care more, but because they are trapped in systems that convert attention into conflict. This division is not accidental; it is profitable, and so it will continue.
The most tragic part is that many people sense this but feel powerless to step outside it. They tell themselves that participation is unavoidable, that visibility is necessary, that disengagement equals irrelevance. They have it exactly backwards. The more tightly you bind your sense of meaning to systems designed to extract from you rather than support you, the more hollow everything becomes. Withdrawal, in this context, is not a lack of interest or concern. It is the first act of emerging clarity.
Conversation, in its simplest physical form, remains more powerful than every platform combined. Natural sound and light waves do not harvest data. Presence does not (yet) require analytics to spread with meaning. Two people in the same space can exchange more reality in five minutes than a thousand posts ever will. The fact that this now sounds almost radical should be alarming.
Let this realization drive you to a loving[1], and offline, 2026.
Closing Note
Meaning does not return through comfort. It returns through friction, proximity, and responsibility—conditions the internet is structurally incapable of providing at scale. NEXUS treats its online presence as secondary by design. The writing and website exist as orientation points, not as engines of growth or validation. The real work is deliberately physical, local, and embodied. That is also why the next phase for me will unfold itself in Berlin—not to build an audience of followers, but to create environments where people can actually meet, argue, collaborate, and take responsibility in real space. Online comes last, if at all. Anything else is pretending it still matters.
Crossing the Threshold
On January 18th, from 19:00 to 20:30 CET, we’re hosting our first webinar: Crossing the Threshold. It’s about moving beyond the familiar 3D script into a grounded, autonomous way of living—how to truly hold yourself in that space.
We won’t dissect this article directly, but the themes overlap. If this piece resonated, this session will take it deeper. Tickets are available at TheNexusFormula.com
See you there!
[1] As in existing through acts of self-knowing, contributing to the whole by means of creation. To truly know is to truly love. I will write something on this later.


I am currently reading Behold a Pale Horse by William Cooper (1991). What Cooper described decades ago is the deliberate introduction of social engineering—not through force, but through conditioning, framing, and internalized self-censorship.
LinkedIn is very much part of that same architecture. It presents itself as professional and neutral, yet it subtly shapes behavior, visibility, and conformity. Depth, deviation, or genuine ambiguity are not punished openly; they are simply rendered invisible. Structurally.
What Wout is pointing at is therefore not new, but a refined continuation of an old design—now normalized, internalized, and largely invisible to those operating inside it.
If we do not act in concert with one another, then whatever unfolds will simply become our fate by default. Reality is not at all what we perceive it to be—but what we collectively fail to question, challenge, and consciously shape.